Test Report: Docker_Linux_crio 21683

cf2611189ddf0f856b4ad9653dc441b770ddd00e:2025-10-02:41739

Failed tests (56/166)

Order  Failed test  Duration (s)
27 TestAddons/Setup 517.17
38 TestErrorSpam/setup 498.9
47 TestFunctional/serial/StartWithProxy 497.47
49 TestFunctional/serial/SoftStart 368.8
51 TestFunctional/serial/KubectlGetPods 1.97
61 TestFunctional/serial/MinikubeKubectlCmd 2.01
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 2
63 TestFunctional/serial/ExtraConfig 733.62
64 TestFunctional/serial/ComponentHealth 1.75
67 TestFunctional/serial/InvalidService 0.06
70 TestFunctional/parallel/DashboardCmd 1.59
73 TestFunctional/parallel/StatusCmd 2.16
77 TestFunctional/parallel/ServiceCmdConnect 1.99
79 TestFunctional/parallel/PersistentVolumeClaim 241.45
83 TestFunctional/parallel/MySQL 2.33
89 TestFunctional/parallel/NodeLabels 1.29
105 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.02
106 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.07
108 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.34
109 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.11
112 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0.07
113 TestFunctional/parallel/MountCmd/any-port 2.28
114 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 111.12
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.29
117 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
118 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.34
120 TestFunctional/parallel/ServiceCmd/DeployApp 0.05
121 TestFunctional/parallel/ServiceCmd/List 0.25
123 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
124 TestFunctional/parallel/ServiceCmd/HTTPS 0.23
125 TestFunctional/parallel/ServiceCmd/Format 0.25
126 TestFunctional/parallel/ServiceCmd/URL 0.24
141 TestMultiControlPlane/serial/StartCluster 502.13
142 TestMultiControlPlane/serial/DeployApp 98.7
143 TestMultiControlPlane/serial/PingHostFromPods 1.29
144 TestMultiControlPlane/serial/AddWorkerNode 1.46
145 TestMultiControlPlane/serial/NodeLabels 1.28
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.5
147 TestMultiControlPlane/serial/CopyFile 1.5
148 TestMultiControlPlane/serial/StopSecondaryNode 1.56
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.51
150 TestMultiControlPlane/serial/RestartSecondaryNode 48.01
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.49
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 369.74
153 TestMultiControlPlane/serial/DeleteSecondaryNode 1.74
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.5
155 TestMultiControlPlane/serial/StopCluster 1.35
156 TestMultiControlPlane/serial/RestartCluster 368.46
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.48
158 TestMultiControlPlane/serial/AddSecondaryNode 1.43
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.49
163 TestJSONOutput/start/Command 497.72
166 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestMinikubeProfile 502.42
221 TestMultiNode/serial/ValidateNameConflict 7200.051
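
To re-run a single failed case outside CI, a minimal sketch (assuming a minikube source checkout; per the addons_test.go reference below, the integration tests live under test/integration and shell out to the binary at out/minikube-linux-amd64, so build that first):

	# build the linux/amd64 binary the tests invoke (target name assumed; matches MINIKUBE_BIN below)
	make out/minikube-linux-amd64
	# run one failed test by name; -run takes a standard `go test` regex matched per subtest
	go test -v -timeout 60m ./test/integration -run 'TestAddons/Setup'
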
TestAddons/Setup (517.17s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-486748 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-486748 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (8m37.13871256s)
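
The failing invocation can be replayed verbatim outside the harness; a sketch with the profile name, memory, driver, runtime, and addons copied from the command above (the remaining --addons flags are exactly those listed in the log):

	out/minikube-linux-amd64 start -p addons-486748 --wait=true --memory=4096 \
	  --driver=docker --container-runtime=crio \
	  --addons=registry --addons=metrics-server  # ...plus the other --addons flags from the log
	# tear the profile down afterwards
	out/minikube-linux-amd64 delete -p addons-486748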

-- stdout --
	* [addons-486748] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "addons-486748" primary control-plane node in "addons-486748" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
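
Note that the user-facing stdout stops at the "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1" step and prints nothing further before the exit at 8m37s. A sketch of standard follow-ups for a hang at this stage (assuming the addons-486748 profile and its container still exist):

	# minikube's aggregated logs for the profile
	out/minikube-linux-amd64 logs -p addons-486748
	# kubelet and container state inside the kic node
	out/minikube-linux-amd64 ssh -p addons-486748 -- sudo journalctl -u kubelet --no-pager
	out/minikube-linux-amd64 ssh -p addons-486748 -- sudo crictl ps -a
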
** stderr ** 
	I1002 19:46:49.097080   14172 out.go:360] Setting OutFile to fd 1 ...
	I1002 19:46:49.097331   14172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:46:49.097342   14172 out.go:374] Setting ErrFile to fd 2...
	I1002 19:46:49.097347   14172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:46:49.097531   14172 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 19:46:49.098069   14172 out.go:368] Setting JSON to false
	I1002 19:46:49.098897   14172 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1758,"bootTime":1759432651,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 19:46:49.098983   14172 start.go:140] virtualization: kvm guest
	I1002 19:46:49.100823   14172 out.go:179] * [addons-486748] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 19:46:49.102124   14172 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 19:46:49.102192   14172 notify.go:221] Checking for updates...
	I1002 19:46:49.104547   14172 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:46:49.105783   14172 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 19:46:49.106797   14172 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 19:46:49.107825   14172 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 19:46:49.108854   14172 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 19:46:49.110054   14172 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 19:46:49.133310   14172 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 19:46:49.133424   14172 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 19:46:49.185386   14172 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-02 19:46:49.175608895 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 19:46:49.185522   14172 docker.go:319] overlay module found
	I1002 19:46:49.187247   14172 out.go:179] * Using the docker driver based on user configuration
	I1002 19:46:49.188768   14172 start.go:306] selected driver: docker
	I1002 19:46:49.188791   14172 start.go:936] validating driver "docker" against <nil>
	I1002 19:46:49.188804   14172 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 19:46:49.189411   14172 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 19:46:49.241362   14172 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-02 19:46:49.231985659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 19:46:49.241534   14172 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 19:46:49.241822   14172 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 19:46:49.243528   14172 out.go:179] * Using Docker driver with root privileges
	I1002 19:46:49.244808   14172 cni.go:84] Creating CNI manager for ""
	I1002 19:46:49.244878   14172 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 19:46:49.244890   14172 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 19:46:49.244961   14172 start.go:350] cluster config:
	{Name:addons-486748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-486748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 19:46:49.246245   14172 out.go:179] * Starting "addons-486748" primary control-plane node in "addons-486748" cluster
	I1002 19:46:49.247401   14172 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 19:46:49.248554   14172 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 19:46:49.249738   14172 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 19:46:49.249768   14172 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 19:46:49.249791   14172 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 19:46:49.249808   14172 cache.go:59] Caching tarball of preloaded images
	I1002 19:46:49.249928   14172 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 19:46:49.249944   14172 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 19:46:49.250350   14172 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/config.json ...
	I1002 19:46:49.250376   14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/config.json: {Name:mk00a5c747d89203b93c17e2728b3edb4ad2afc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:46:49.266988   14172 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 19:46:49.267112   14172 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 19:46:49.267137   14172 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 19:46:49.267141   14172 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 19:46:49.267149   14172 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 19:46:49.267156   14172 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 19:47:01.617794   14172 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 19:47:01.617834   14172 cache.go:233] Successfully downloaded all kic artifacts
	I1002 19:47:01.617871   14172 start.go:361] acquireMachinesLock for addons-486748: {Name:mk12f88a4445be3b9140c03872d799e59dbb6f60 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:47:01.617984   14172 start.go:365] duration metric: took 91.828µs to acquireMachinesLock for "addons-486748"
	I1002 19:47:01.618017   14172 start.go:94] Provisioning new machine with config: &{Name:addons-486748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-486748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 19:47:01.618138   14172 start.go:126] createHost starting for "" (driver="docker")
	I1002 19:47:01.620051   14172 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 19:47:01.620359   14172 start.go:160] libmachine.API.Create for "addons-486748" (driver="docker")
	I1002 19:47:01.620396   14172 client.go:168] LocalClient.Create starting
	I1002 19:47:01.620512   14172 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem
	I1002 19:47:01.666865   14172 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem
	I1002 19:47:01.895395   14172 cli_runner.go:164] Run: docker network inspect addons-486748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 19:47:01.911777   14172 cli_runner.go:211] docker network inspect addons-486748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 19:47:01.911849   14172 network_create.go:284] running [docker network inspect addons-486748] to gather additional debugging logs...
	I1002 19:47:01.911869   14172 cli_runner.go:164] Run: docker network inspect addons-486748
	W1002 19:47:01.927622   14172 cli_runner.go:211] docker network inspect addons-486748 returned with exit code 1
	I1002 19:47:01.927662   14172 network_create.go:287] error running [docker network inspect addons-486748]: docker network inspect addons-486748: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-486748 not found
	I1002 19:47:01.927685   14172 network_create.go:289] output of [docker network inspect addons-486748]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-486748 not found
	
	** /stderr **
	I1002 19:47:01.927823   14172 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 19:47:01.944235   14172 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00029d000}
	I1002 19:47:01.944292   14172 network_create.go:124] attempt to create docker network addons-486748 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 19:47:01.944342   14172 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-486748 addons-486748
	I1002 19:47:01.999393   14172 network_create.go:108] docker network addons-486748 192.168.49.0/24 created
	I1002 19:47:01.999423   14172 kic.go:121] calculated static IP "192.168.49.2" for the "addons-486748" container
	I1002 19:47:01.999476   14172 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 19:47:02.015113   14172 cli_runner.go:164] Run: docker volume create addons-486748 --label name.minikube.sigs.k8s.io=addons-486748 --label created_by.minikube.sigs.k8s.io=true
	I1002 19:47:02.032151   14172 oci.go:103] Successfully created a docker volume addons-486748
	I1002 19:47:02.032222   14172 cli_runner.go:164] Run: docker run --rm --name addons-486748-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-486748 --entrypoint /usr/bin/test -v addons-486748:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 19:47:08.841052   14172 cli_runner.go:217] Completed: docker run --rm --name addons-486748-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-486748 --entrypoint /usr/bin/test -v addons-486748:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (6.808771103s)
	I1002 19:47:08.841097   14172 oci.go:107] Successfully prepared a docker volume addons-486748
	I1002 19:47:08.841125   14172 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 19:47:08.841146   14172 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 19:47:08.841196   14172 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-486748:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 19:47:13.263298   14172 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-486748:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.42206256s)
	I1002 19:47:13.263328   14172 kic.go:203] duration metric: took 4.42217979s to extract preloaded images to volume ...
	W1002 19:47:13.263441   14172 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 19:47:13.263483   14172 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 19:47:13.263519   14172 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 19:47:13.317362   14172 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-486748 --name addons-486748 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-486748 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-486748 --network addons-486748 --ip 192.168.49.2 --volume addons-486748:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 19:47:13.600036   14172 cli_runner.go:164] Run: docker container inspect addons-486748 --format={{.State.Running}}
	I1002 19:47:13.618884   14172 cli_runner.go:164] Run: docker container inspect addons-486748 --format={{.State.Status}}
	I1002 19:47:13.639013   14172 cli_runner.go:164] Run: docker exec addons-486748 stat /var/lib/dpkg/alternatives/iptables
	I1002 19:47:13.683853   14172 oci.go:144] the created container "addons-486748" has a running status.
	I1002 19:47:13.683900   14172 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/addons-486748/id_rsa...
	I1002 19:47:14.209719   14172 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-9327/.minikube/machines/addons-486748/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 19:47:14.236215   14172 cli_runner.go:164] Run: docker container inspect addons-486748 --format={{.State.Status}}
	I1002 19:47:14.255956   14172 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 19:47:14.255981   14172 kic_runner.go:114] Args: [docker exec --privileged addons-486748 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 19:47:14.293510   14172 cli_runner.go:164] Run: docker container inspect addons-486748 --format={{.State.Status}}
	I1002 19:47:14.311968   14172 machine.go:93] provisionDockerMachine start ...
	I1002 19:47:14.312070   14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
	I1002 19:47:14.329017   14172 main.go:141] libmachine: Using SSH client type: native
	I1002 19:47:14.329242   14172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1002 19:47:14.329256   14172 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 19:47:14.471463   14172 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-486748
	
	I1002 19:47:14.471489   14172 ubuntu.go:182] provisioning hostname "addons-486748"
	I1002 19:47:14.471554   14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
	I1002 19:47:14.488781   14172 main.go:141] libmachine: Using SSH client type: native
	I1002 19:47:14.488984   14172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1002 19:47:14.488998   14172 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-486748 && echo "addons-486748" | sudo tee /etc/hostname
	I1002 19:47:14.639678   14172 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-486748
	
	I1002 19:47:14.639775   14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
	I1002 19:47:14.657006   14172 main.go:141] libmachine: Using SSH client type: native
	I1002 19:47:14.657273   14172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1002 19:47:14.657294   14172 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-486748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-486748/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-486748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 19:47:14.800181   14172 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 19:47:14.800212   14172 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 19:47:14.800252   14172 ubuntu.go:190] setting up certificates
	I1002 19:47:14.800268   14172 provision.go:84] configureAuth start
	I1002 19:47:14.800322   14172 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-486748
	I1002 19:47:14.818158   14172 provision.go:143] copyHostCerts
	I1002 19:47:14.818232   14172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 19:47:14.818341   14172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 19:47:14.818447   14172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 19:47:14.818510   14172 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.addons-486748 san=[127.0.0.1 192.168.49.2 addons-486748 localhost minikube]
	I1002 19:47:14.975696   14172 provision.go:177] copyRemoteCerts
	I1002 19:47:14.975756   14172 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 19:47:14.975791   14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
	I1002 19:47:14.992892   14172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/addons-486748/id_rsa Username:docker}
	I1002 19:47:15.093537   14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 19:47:15.112185   14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 19:47:15.128814   14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 19:47:15.144614   14172 provision.go:87] duration metric: took 344.309849ms to configureAuth
	I1002 19:47:15.144644   14172 ubuntu.go:206] setting minikube options for container-runtime
	I1002 19:47:15.144846   14172 config.go:182] Loaded profile config "addons-486748": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 19:47:15.144947   14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
	I1002 19:47:15.162214   14172 main.go:141] libmachine: Using SSH client type: native
	I1002 19:47:15.162421   14172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1002 19:47:15.162440   14172 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 19:47:15.418163   14172 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 19:47:15.418185   14172 machine.go:96] duration metric: took 1.106195757s to provisionDockerMachine
	I1002 19:47:15.418196   14172 client.go:171] duration metric: took 13.797788888s to LocalClient.Create
	I1002 19:47:15.418212   14172 start.go:168] duration metric: took 13.797855415s to libmachine.API.Create "addons-486748"
	I1002 19:47:15.418219   14172 start.go:294] postStartSetup for "addons-486748" (driver="docker")
	I1002 19:47:15.418228   14172 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 19:47:15.418285   14172 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 19:47:15.418331   14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
	I1002 19:47:15.435548   14172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/addons-486748/id_rsa Username:docker}
	I1002 19:47:15.538706   14172 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 19:47:15.542216   14172 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 19:47:15.542251   14172 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 19:47:15.542266   14172 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 19:47:15.542334   14172 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 19:47:15.542367   14172 start.go:297] duration metric: took 124.141576ms for postStartSetup
	I1002 19:47:15.542756   14172 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-486748
	I1002 19:47:15.560824   14172 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/config.json ...
	I1002 19:47:15.561127   14172 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 19:47:15.561171   14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
	I1002 19:47:15.578232   14172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/addons-486748/id_rsa Username:docker}
	I1002 19:47:15.676624   14172 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 19:47:15.680897   14172 start.go:129] duration metric: took 14.062740747s to createHost
	I1002 19:47:15.680923   14172 start.go:84] releasing machines lock for "addons-486748", held for 14.062924618s
	I1002 19:47:15.680981   14172 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-486748
	I1002 19:47:15.698506   14172 ssh_runner.go:195] Run: cat /version.json
	I1002 19:47:15.698536   14172 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 19:47:15.698561   14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
	I1002 19:47:15.698595   14172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-486748
	I1002 19:47:15.717794   14172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/addons-486748/id_rsa Username:docker}
	I1002 19:47:15.719465   14172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/addons-486748/id_rsa Username:docker}
	I1002 19:47:15.871074   14172 ssh_runner.go:195] Run: systemctl --version
	I1002 19:47:15.877278   14172 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 19:47:15.911918   14172 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 19:47:15.916385   14172 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 19:47:15.916452   14172 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 19:47:15.942087   14172 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 19:47:15.942113   14172 start.go:496] detecting cgroup driver to use...
	I1002 19:47:15.942147   14172 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 19:47:15.942209   14172 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 19:47:15.957607   14172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:47:15.969570   14172 docker.go:218] disabling cri-docker service (if available) ...
	I1002 19:47:15.969623   14172 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 19:47:15.985521   14172 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 19:47:16.002428   14172 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 19:47:16.084633   14172 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 19:47:16.169148   14172 docker.go:234] disabling docker service ...
	I1002 19:47:16.169206   14172 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 19:47:16.187037   14172 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 19:47:16.199516   14172 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 19:47:16.282323   14172 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 19:47:16.361980   14172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 19:47:16.374469   14172 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:47:16.388487   14172 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 19:47:16.388541   14172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 19:47:16.398812   14172 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 19:47:16.398896   14172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 19:47:16.407413   14172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 19:47:16.415894   14172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 19:47:16.424226   14172 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 19:47:16.432220   14172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 19:47:16.440541   14172 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 19:47:16.453557   14172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 19:47:16.462034   14172 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 19:47:16.469308   14172 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 19:47:16.469357   14172 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 19:47:16.481016   14172 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 19:47:16.488267   14172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:47:16.564684   14172 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 19:47:16.666081   14172 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 19:47:16.666151   14172 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 19:47:16.670081   14172 start.go:564] Will wait 60s for crictl version
	I1002 19:47:16.670138   14172 ssh_runner.go:195] Run: which crictl
	I1002 19:47:16.673464   14172 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 19:47:16.696583   14172 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 19:47:16.696726   14172 ssh_runner.go:195] Run: crio --version
	I1002 19:47:16.722808   14172 ssh_runner.go:195] Run: crio --version
	I1002 19:47:16.751333   14172 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 19:47:16.752754   14172 cli_runner.go:164] Run: docker network inspect addons-486748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 19:47:16.770947   14172 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 19:47:16.775007   14172 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 19:47:16.785427   14172 kubeadm.go:883] updating cluster {Name:addons-486748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-486748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 19:47:16.785596   14172 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 19:47:16.785683   14172 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 19:47:16.817800   14172 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 19:47:16.817820   14172 crio.go:433] Images already preloaded, skipping extraction
	I1002 19:47:16.817869   14172 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 19:47:16.843260   14172 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 19:47:16.843282   14172 cache_images.go:85] Images are preloaded, skipping loading
	I1002 19:47:16.843290   14172 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 19:47:16.843370   14172 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-486748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-486748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 19:47:16.843431   14172 ssh_runner.go:195] Run: crio config
	I1002 19:47:16.886753   14172 cni.go:84] Creating CNI manager for ""
	I1002 19:47:16.886782   14172 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 19:47:16.886800   14172 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 19:47:16.886821   14172 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-486748 NodeName:addons-486748 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 19:47:16.886971   14172 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-486748"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 19:47:16.887039   14172 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 19:47:16.894881   14172 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 19:47:16.894956   14172 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 19:47:16.902634   14172 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 19:47:16.914966   14172 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 19:47:16.930275   14172 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1002 19:47:16.942556   14172 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 19:47:16.946145   14172 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 19:47:16.955790   14172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:47:17.027129   14172 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 19:47:17.050908   14172 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748 for IP: 192.168.49.2
	I1002 19:47:17.050932   14172 certs.go:195] generating shared ca certs ...
	I1002 19:47:17.050953   14172 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:47:17.051078   14172 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 19:47:17.386505   14172 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt ...
	I1002 19:47:17.386536   14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt: {Name:mk786afdd62ef3a772faf0132a7a1ec7f6ce72dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:47:17.386725   14172 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key ...
	I1002 19:47:17.386744   14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key: {Name:mk2d72d3a4f6d4419e21e1fad643fb52f178516c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:47:17.386825   14172 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 19:47:17.454269   14172 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt ...
	I1002 19:47:17.454296   14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt: {Name:mk1e303a39d725289fbf8ee759df3fa9d45b3854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:47:17.454446   14172 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key ...
	I1002 19:47:17.454456   14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key: {Name:mk3548622bc975a3985c07a4d3c6f05eb739b141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:47:17.454518   14172 certs.go:257] generating profile certs ...
	I1002 19:47:17.454572   14172 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/client.key
	I1002 19:47:17.454586   14172 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/client.crt with IP's: []
	I1002 19:47:17.589435   14172 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/client.crt ...
	I1002 19:47:17.589466   14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/client.crt: {Name:mkda052f537b3a8fe8f52ad21ef111e7ec46e7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:47:17.589655   14172 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/client.key ...
	I1002 19:47:17.589667   14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/client.key: {Name:mk3e2aab61de07ec774bed14a198f947b6c813ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:47:17.589744   14172 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.key.7bbafc29
	I1002 19:47:17.589764   14172 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.crt.7bbafc29 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 19:47:17.885024   14172 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.crt.7bbafc29 ...
	I1002 19:47:17.885054   14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.crt.7bbafc29: {Name:mk7ce1b5544769da61acbaf89af97631724f0bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:47:17.885215   14172 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.key.7bbafc29 ...
	I1002 19:47:17.885228   14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.key.7bbafc29: {Name:mkdc658fe27e2c44d1169b7de754f9a79aa2d243 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:47:17.885293   14172 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.crt.7bbafc29 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.crt
	I1002 19:47:17.885368   14172 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.key.7bbafc29 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.key
	I1002 19:47:17.885415   14172 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/proxy-client.key
	I1002 19:47:17.885429   14172 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/proxy-client.crt with IP's: []
	I1002 19:47:18.275309   14172 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/proxy-client.crt ...
	I1002 19:47:18.275345   14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/proxy-client.crt: {Name:mk27b2d4b020fc9e8e22760e08299eb5542b2473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:47:18.275538   14172 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/proxy-client.key ...
	I1002 19:47:18.275550   14172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/proxy-client.key: {Name:mk79533042f36070d179aa737abedeabdfe5f0e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:47:18.275801   14172 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 19:47:18.275842   14172 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 19:47:18.275869   14172 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 19:47:18.275893   14172 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 19:47:18.276458   14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 19:47:18.294136   14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 19:47:18.310676   14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 19:47:18.327504   14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 19:47:18.343634   14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 19:47:18.360194   14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 19:47:18.376638   14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 19:47:18.393694   14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/addons-486748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 19:47:18.410259   14172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 19:47:18.428359   14172 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 19:47:18.440533   14172 ssh_runner.go:195] Run: openssl version
	I1002 19:47:18.446551   14172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 19:47:18.457523   14172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:47:18.461348   14172 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:47:18.461397   14172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:47:18.495069   14172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
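	The b5213941.0 link above follows OpenSSL's subject-hash naming convention: the link name is the output of 'openssl x509 -hash' plus a ".0" suffix. A hedged sketch that derives the name instead of hard-coding it:
	
	  CA=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CA")   # prints e.g. b5213941
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	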
	I1002 19:47:18.503632   14172 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 19:47:18.507446   14172 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 19:47:18.507497   14172 kubeadm.go:400] StartCluster: {Name:addons-486748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-486748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 19:47:18.507559   14172 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 19:47:18.507623   14172 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 19:47:18.535082   14172 cri.go:89] found id: ""
	I1002 19:47:18.535161   14172 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 19:47:18.542948   14172 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 19:47:18.550564   14172 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 19:47:18.550631   14172 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 19:47:18.557899   14172 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 19:47:18.557917   14172 kubeadm.go:157] found existing configuration files:
	
	I1002 19:47:18.557952   14172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 19:47:18.565100   14172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 19:47:18.565151   14172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 19:47:18.571947   14172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 19:47:18.578752   14172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 19:47:18.578810   14172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 19:47:18.585583   14172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 19:47:18.592729   14172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 19:47:18.592779   14172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 19:47:18.599549   14172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 19:47:18.606415   14172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 19:47:18.606478   14172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 19:47:18.613140   14172 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 19:47:18.681509   14172 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 19:47:18.737422   14172 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 19:51:23.249125   14172 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 19:51:23.249257   14172 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 19:51:23.251457   14172 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 19:51:23.251523   14172 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 19:51:23.251630   14172 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 19:51:23.251738   14172 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 19:51:23.251803   14172 kubeadm.go:318] OS: Linux
	I1002 19:51:23.251843   14172 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 19:51:23.251901   14172 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 19:51:23.251969   14172 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 19:51:23.252035   14172 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 19:51:23.252079   14172 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 19:51:23.252119   14172 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 19:51:23.252174   14172 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 19:51:23.252254   14172 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 19:51:23.252380   14172 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 19:51:23.252560   14172 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 19:51:23.252701   14172 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 19:51:23.252810   14172 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 19:51:23.255322   14172 out.go:252]   - Generating certificates and keys ...
	I1002 19:51:23.255409   14172 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 19:51:23.255519   14172 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 19:51:23.255616   14172 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 19:51:23.255729   14172 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 19:51:23.255813   14172 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 19:51:23.255892   14172 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 19:51:23.255983   14172 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 19:51:23.256123   14172 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-486748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 19:51:23.256196   14172 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 19:51:23.256367   14172 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-486748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 19:51:23.256467   14172 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 19:51:23.256528   14172 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 19:51:23.256567   14172 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 19:51:23.256716   14172 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 19:51:23.256792   14172 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 19:51:23.256861   14172 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 19:51:23.256940   14172 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 19:51:23.257047   14172 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 19:51:23.257137   14172 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 19:51:23.257245   14172 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 19:51:23.257342   14172 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 19:51:23.259112   14172 out.go:252]   - Booting up control plane ...
	I1002 19:51:23.259225   14172 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 19:51:23.259350   14172 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 19:51:23.259432   14172 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 19:51:23.259514   14172 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 19:51:23.259587   14172 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 19:51:23.259726   14172 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 19:51:23.259837   14172 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 19:51:23.259900   14172 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 19:51:23.260072   14172 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 19:51:23.260213   14172 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 19:51:23.260293   14172 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.755891ms
	I1002 19:51:23.260396   14172 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 19:51:23.260488   14172 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 19:51:23.260607   14172 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 19:51:23.260742   14172 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 19:51:23.260872   14172 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001135688s
	I1002 19:51:23.260976   14172 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001214908s
	I1002 19:51:23.261081   14172 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001260227s
	I1002 19:51:23.261090   14172 kubeadm.go:318] 
	I1002 19:51:23.261198   14172 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 19:51:23.261297   14172 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtime's CLI.
	I1002 19:51:23.261410   14172 kubeadm.go:318] Here is one example of how you can list all running Kubernetes containers by using crictl:
	I1002 19:51:23.261533   14172 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 19:51:23.261622   14172 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 19:51:23.261726   14172 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 19:51:23.261750   14172 kubeadm.go:318] 
	W1002 19:51:23.261900   14172 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [addons-486748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [addons-486748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.755891ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001135688s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001214908s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001260227s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you can list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
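	Following kubeadm's advice above, a hedged loop that finds each control-plane container CRI-O knows about and tails its logs (both crictl flags appear verbatim elsewhere in this log; the 20-line tail is an arbitrary choice):
	
	  CRI=unix:///var/run/crio/crio.sock
	  for name in kube-apiserver kube-controller-manager kube-scheduler etcd; do
	    for id in $(sudo crictl --runtime-endpoint "$CRI" ps -a --quiet --name="$name"); do
	      echo "== $name $id =="
	      sudo crictl --runtime-endpoint "$CRI" logs "$id" 2>&1 | tail -n 20
	    done
	  done
	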
	
	I1002 19:51:23.261986   14172 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 19:51:23.703790   14172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
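	Between attempts, the 'kubeadm reset --force' above should clear the static-pod manifests and the etcd data dir; a hedged spot-check of what the reset left behind:
	
	  sudo ls -la /etc/kubernetes/manifests /var/lib/minikube/etcd 2>/dev/null
	  sudo systemctl is-active --quiet kubelet && echo kubelet running || echo kubelet stopped
	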
	I1002 19:51:23.716020   14172 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 19:51:23.716072   14172 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 19:51:23.723743   14172 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 19:51:23.723761   14172 kubeadm.go:157] found existing configuration files:
	
	I1002 19:51:23.723801   14172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 19:51:23.731372   14172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 19:51:23.731421   14172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 19:51:23.738512   14172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 19:51:23.746362   14172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 19:51:23.746413   14172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 19:51:23.753680   14172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 19:51:23.760844   14172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 19:51:23.760881   14172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 19:51:23.767473   14172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 19:51:23.774515   14172 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 19:51:23.774552   14172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 19:51:23.781363   14172 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 19:51:23.815035   14172 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 19:51:23.815109   14172 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 19:51:23.833732   14172 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 19:51:23.833829   14172 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 19:51:23.833880   14172 kubeadm.go:318] OS: Linux
	I1002 19:51:23.833938   14172 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 19:51:23.833989   14172 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 19:51:23.834031   14172 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 19:51:23.834101   14172 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 19:51:23.834186   14172 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 19:51:23.834262   14172 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 19:51:23.834331   14172 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 19:51:23.834404   14172 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 19:51:23.887155   14172 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 19:51:23.887253   14172 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 19:51:23.887375   14172 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 19:51:23.893210   14172 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 19:51:23.896460   14172 out.go:252]   - Generating certificates and keys ...
	I1002 19:51:23.896564   14172 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 19:51:23.896683   14172 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 19:51:23.896766   14172 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 19:51:23.896838   14172 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 19:51:23.896957   14172 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 19:51:23.897044   14172 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 19:51:23.897132   14172 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 19:51:23.897215   14172 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 19:51:23.897293   14172 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 19:51:23.897387   14172 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 19:51:23.897424   14172 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 19:51:23.897469   14172 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 19:51:24.497248   14172 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 19:51:24.717728   14172 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 19:51:24.811928   14172 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 19:51:25.063570   14172 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 19:51:25.151082   14172 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 19:51:25.151462   14172 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 19:51:25.153580   14172 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 19:51:25.155601   14172 out.go:252]   - Booting up control plane ...
	I1002 19:51:25.155713   14172 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 19:51:25.155841   14172 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 19:51:25.156725   14172 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 19:51:25.169495   14172 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 19:51:25.169587   14172 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 19:51:25.175662   14172 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 19:51:25.175909   14172 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 19:51:25.175955   14172 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 19:51:25.274141   14172 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 19:51:25.274297   14172 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 19:51:25.775812   14172 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.811567ms
	I1002 19:51:25.778423   14172 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 19:51:25.778548   14172 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 19:51:25.778637   14172 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 19:51:25.778775   14172 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 19:55:25.779313   14172 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000264431s
	I1002 19:55:25.779534   14172 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000547002s
	I1002 19:55:25.779756   14172 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00067383s
	I1002 19:55:25.779832   14172 kubeadm.go:318] 
	I1002 19:55:25.780094   14172 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 19:55:25.780274   14172 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtime's CLI.
	I1002 19:55:25.780428   14172 kubeadm.go:318] Here is one example of how you can list all running Kubernetes containers by using crictl:
	I1002 19:55:25.780593   14172 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 19:55:25.780793   14172 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 19:55:25.781023   14172 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 19:55:25.781038   14172 kubeadm.go:318] 
	I1002 19:55:25.783122   14172 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 19:55:25.783256   14172 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 19:55:25.783906   14172 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 19:55:25.784012   14172 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
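	The three endpoints kubeadm gave up on can be probed by hand before the container and log sweep below; a hedged sketch (-k skips TLS verification, since these serve cluster-internal certificates):
	
	  curl -sk --max-time 5 https://192.168.49.2:8443/livez; echo    # kube-apiserver
	  curl -sk --max-time 5 https://127.0.0.1:10257/healthz; echo    # kube-controller-manager
	  curl -sk --max-time 5 https://127.0.0.1:10259/livez; echo      # kube-scheduler
	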
	I1002 19:55:25.784103   14172 kubeadm.go:402] duration metric: took 8m7.276606859s to StartCluster
	I1002 19:55:25.784157   14172 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 19:55:25.784220   14172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 19:55:25.809866   14172 cri.go:89] found id: ""
	I1002 19:55:25.809904   14172 logs.go:282] 0 containers: []
	W1002 19:55:25.809914   14172 logs.go:284] No container was found matching "kube-apiserver"
	I1002 19:55:25.809924   14172 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 19:55:25.809989   14172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 19:55:25.834613   14172 cri.go:89] found id: ""
	I1002 19:55:25.834636   14172 logs.go:282] 0 containers: []
	W1002 19:55:25.834644   14172 logs.go:284] No container was found matching "etcd"
	I1002 19:55:25.834666   14172 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 19:55:25.834719   14172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 19:55:25.859621   14172 cri.go:89] found id: ""
	I1002 19:55:25.859642   14172 logs.go:282] 0 containers: []
	W1002 19:55:25.859666   14172 logs.go:284] No container was found matching "coredns"
	I1002 19:55:25.859674   14172 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 19:55:25.859724   14172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 19:55:25.884720   14172 cri.go:89] found id: ""
	I1002 19:55:25.884746   14172 logs.go:282] 0 containers: []
	W1002 19:55:25.884756   14172 logs.go:284] No container was found matching "kube-scheduler"
	I1002 19:55:25.884764   14172 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 19:55:25.884811   14172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 19:55:25.910002   14172 cri.go:89] found id: ""
	I1002 19:55:25.910022   14172 logs.go:282] 0 containers: []
	W1002 19:55:25.910029   14172 logs.go:284] No container was found matching "kube-proxy"
	I1002 19:55:25.910034   14172 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 19:55:25.910083   14172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 19:55:25.935352   14172 cri.go:89] found id: ""
	I1002 19:55:25.935373   14172 logs.go:282] 0 containers: []
	W1002 19:55:25.935381   14172 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 19:55:25.935387   14172 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 19:55:25.935429   14172 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 19:55:25.960341   14172 cri.go:89] found id: ""
	I1002 19:55:25.960364   14172 logs.go:282] 0 containers: []
	W1002 19:55:25.960372   14172 logs.go:284] No container was found matching "kindnet"
	I1002 19:55:25.960381   14172 logs.go:123] Gathering logs for dmesg ...
	I1002 19:55:25.960394   14172 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 19:55:25.971311   14172 logs.go:123] Gathering logs for describe nodes ...
	I1002 19:55:25.971334   14172 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 19:55:26.027119   14172 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 19:55:26.019599    2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 19:55:26.020109    2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 19:55:26.021739    2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 19:55:26.022159    2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 19:55:26.024489    2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 19:55:26.019599    2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 19:55:26.020109    2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 19:55:26.021739    2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 19:55:26.022159    2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 19:55:26.024489    2390 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 19:55:26.027138   14172 logs.go:123] Gathering logs for CRI-O ...
	I1002 19:55:26.027149   14172 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 19:55:26.088066   14172 logs.go:123] Gathering logs for container status ...
	I1002 19:55:26.088099   14172 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 19:55:26.115405   14172 logs.go:123] Gathering logs for kubelet ...
	I1002 19:55:26.115431   14172 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
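	When the 400-line kubelet journal gathered above is noisy, a hedged filter that surfaces the usual failure keywords first:
	
	  sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail|back-off' | tail -n 40
	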
	W1002 19:55:26.182640   14172 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.811567ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000264431s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000547002s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00067383s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 19:55:26.182714   14172 out.go:285] * 
	W1002 19:55:26.182773   14172 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1002 19:55:26.182787   14172 out.go:285] * 
	W1002 19:55:26.184528   14172 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 19:55:26.187968   14172 out.go:203] 
	W1002 19:55:26.189180   14172 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1002 19:55:26.189206   14172 out.go:285] * 
	I1002 19:55:26.190431   14172 out.go:203] 

** /stderr **
addons_test.go:110: out/minikube-linux-amd64 start -p addons-486748 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (517.17s)
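Note on the failure above: this is a wait-control-plane timeout, not a kubeadm config problem. The kubelet came up healthy in about half a second, but none of kube-apiserver, kube-controller-manager, or kube-scheduler answered its health endpoint within 4m0s, consistent with the log's own hint that a control-plane component may have crashed or exited when started by the container runtime. A minimal triage sketch, reusing only diagnostics this log already names (profile name, CRI-O socket path, and journal line counts as logged; CONTAINERID is a placeholder):

	# on the host: collect the full log bundle for the failing profile
	minikube logs -p addons-486748 --file=logs.txt
	# inside the node (minikube ssh -p addons-486748): find and inspect exited control-plane containers
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400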

x
+
TestErrorSpam/setup (498.9s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-547008 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-547008 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p nospam-547008 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-547008 --driver=docker  --container-runtime=crio: exit status 80 (8m18.888971423s)

-- stdout --
	* [nospam-547008] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "nospam-547008" primary control-plane node in "nospam-547008" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost nospam-547008] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-547008] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.772273ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000209752s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000498773s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000558495s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001884378s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001098192s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001266598s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00121722s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	* 

** /stderr **
error_spam_test.go:83: "out/minikube-linux-amd64 start -p nospam-547008 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-547008 --driver=docker  --container-runtime=crio" failed: exit status 80
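The same wait-control-plane timeout recurs here, with every probe ending in connection refused or a client rate-limiter deadline. Since the endpoints kubeadm polls are plain HTTP(S) health checks, they can be probed by hand to see whether anything is listening; a minimal sketch, assuming the nospam-547008 node is still up (ports and paths exactly as logged; -k only because the serving certificates are self-signed):

	# inside the node, e.g. via: minikube ssh -p nospam-547008
	curl -s  http://127.0.0.1:10248/healthz    # kubelet
	curl -sk https://192.168.49.2:8443/livez   # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz   # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez     # kube-scheduler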
error_spam_test.go:96: unexpected stderr: "! initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-kubelet-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/server\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/server serving cert is signed for DNS names [localhost nospam-547008] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/peer\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-547008] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/healthcheck-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-etcd-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"sa\" key and public key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 501.772273ms"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.000209752s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.000498773s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.000558495s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "X Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 1.001884378s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.001098192s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.001266598s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.00121722s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 1.001884378s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.001098192s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.001266598s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.00121722s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:110: minikube stdout:
* [nospam-547008] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21683
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "nospam-547008" primary control-plane node in "nospam-547008" cluster
* Pulling base image v0.0.48-1759382731-21643 ...
* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...

error_spam_test.go:111: minikube stderr:
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost nospam-547008] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-547008] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.772273ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.000209752s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000498773s
[control-plane-check] kube-scheduler is not healthy after 4m0.000558495s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'

stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher

* 
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001884378s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.001098192s
[control-plane-check] kube-scheduler is not healthy after 4m0.001266598s
[control-plane-check] kube-controller-manager is not healthy after 4m0.00121722s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'

stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher

* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001884378s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.001098192s
[control-plane-check] kube-scheduler is not healthy after 4m0.001266598s
[control-plane-check] kube-controller-manager is not healthy after 4m0.00121722s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'

stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher

* 
--- FAIL: TestErrorSpam/setup (498.90s)
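The crictl commands quoted in the kubeadm output above can be followed by hand to triage this failure mode. A minimal sketch, assuming the docker-driver node container carries the profile name (nospam-547008 in this run) and that crictl is available on the node, as minikube node images normally provide:

	docker exec -it nospam-547008 bash
	# inside the node: list the kube-* containers CRI-O started (command quoted from the kubeadm advice)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then inspect a failing container's logs, substituting its ID for CONTAINERID
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

The health endpoints kubeadm polls can likewise be probed from inside the node (curl -k https://192.168.49.2:8443/livez, curl -k https://127.0.0.1:10257/healthz, curl -k https://127.0.0.1:10259/livez); here the dials were refused outright, which points at the static pods never starting rather than failing their probes.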

x
+
TestFunctional/serial/StartWithProxy (497.47s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-753218 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-753218 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: exit status 80 (8m16.197129276s)

-- stdout --
	* [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-753218" primary control-plane node in "functional-753218" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Found network options:
	  - HTTP_PROXY=localhost:46449
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:46449 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-753218 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-753218 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.792687ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000430961s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000469361s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000504704s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.332089ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000033383s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000195333s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000401588s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.332089ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000033383s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000195333s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000401588s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-amd64 start -p functional-753218 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio": exit status 80
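For anyone replaying this failure by hand, the kubeadm transcript above already names the three health endpoints being polled. A minimal sketch of re-running those probes (the controller-manager and scheduler listen on localhost, so the probes must run inside the node container; this assumes curl is present in the kicbase image, and -k skips the cluster's self-signed certs):

	docker exec functional-753218 curl -ks https://192.168.49.2:8441/livez     # kube-apiserver
	docker exec functional-753218 curl -ks https://127.0.0.1:10257/healthz    # kube-controller-manager
	docker exec functional-753218 curl -ks https://127.0.0.1:10259/livez      # kube-scheduler
	# then list the kube containers, per kubeadm's own suggestion:
	docker exec functional-753218 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If a control-plane container shows as repeatedly exited there, 'crictl logs CONTAINERID' (also quoted above) is the next step.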
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753218
helpers_test.go:243: (dbg) docker inspect functional-753218:

-- stdout --
	[
	    {
	        "Id": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	        "Created": "2025-10-02T20:04:01.746609873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27425,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:04:01.779241985Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hosts",
	        "LogPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2-json.log",
	        "Name": "/functional-753218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	                "LowerDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753218",
	                "Source": "/var/lib/docker/volumes/functional-753218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753218",
	                "name.minikube.sigs.k8s.io": "functional-753218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "004081b2f3377fd186346a9b01eb1a4bc9a3def8255492ba6d60a3d97d3ae02f",
	            "SandboxKey": "/var/run/docker/netns/004081b2f337",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:da:57:41:87:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5a9808f557430f8bb8f634f8b99193d9e4883c7bab84d388ec04ce3ac0291ef6",
	                    "EndpointID": "2b07b33f476d99b70eed072dbaf735ba0ad3a85f48f17fd63921a2bfbfd2db45",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753218",
	                        "13014a14c3fc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
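Rather than scanning the full inspect JSON above, a single field can be pulled with a Go template; the same index pattern appears later in this log when minikube resolves the SSH port. A sketch for the apiserver port mapping:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-753218
	# prints 32781 for the container captured above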
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218: exit status 6 (292.416039ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:12:13.150750   31841 status.go:458] kubeconfig endpoint: get endpoint: "functional-753218" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
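The status output itself names the remedy for the stale kubeconfig entry. A sketch using this run's profile (this repairs only the kubectl context; it cannot revive the control plane that failed above):

	out/minikube-linux-amd64 update-context -p functional-753218
	kubectl config current-context    # should now report functional-753218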
helpers_test.go:252: <<< TestFunctional/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 logs -n 25
helpers_test.go:260: TestFunctional/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-572495                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-572495   │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │ 02 Oct 25 19:46 UTC │
	│ delete  │ -p download-only-961266                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-961266   │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │ 02 Oct 25 19:46 UTC │
	│ start   │ --download-only -p download-docker-213285 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-213285 │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	│ delete  │ -p download-docker-213285                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-213285 │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │ 02 Oct 25 19:46 UTC │
	│ start   │ --download-only -p binary-mirror-331754 --alsologtostderr --binary-mirror http://127.0.0.1:42675 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-331754   │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	│ delete  │ -p binary-mirror-331754                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-331754   │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │ 02 Oct 25 19:46 UTC │
	│ addons  │ disable dashboard -p addons-486748                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-486748          │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	│ addons  │ enable dashboard -p addons-486748                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-486748          │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	│ start   │ -p addons-486748 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-486748          │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	│ delete  │ -p addons-486748                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-486748          │ jenkins │ v1.37.0 │ 02 Oct 25 19:55 UTC │ 02 Oct 25 19:55 UTC │
	│ start   │ -p nospam-547008 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-547008 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 19:55 UTC │                     │
	│ start   │ nospam-547008 --log_dir /tmp/nospam-547008 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │                     │
	│ start   │ nospam-547008 --log_dir /tmp/nospam-547008 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │                     │
	│ start   │ nospam-547008 --log_dir /tmp/nospam-547008 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │                     │
	│ pause   │ nospam-547008 --log_dir /tmp/nospam-547008 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ pause   │ nospam-547008 --log_dir /tmp/nospam-547008 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ pause   │ nospam-547008 --log_dir /tmp/nospam-547008 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ delete  │ -p nospam-547008                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ start   │ -p functional-753218 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-753218      │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:03:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:03:56.696051   26859 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:03:56.696132   26859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:03:56.696135   26859 out.go:374] Setting ErrFile to fd 2...
	I1002 20:03:56.696137   26859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:03:56.696352   26859 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:03:56.696882   26859 out.go:368] Setting JSON to false
	I1002 20:03:56.697748   26859 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2786,"bootTime":1759432651,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:03:56.697821   26859 start.go:140] virtualization: kvm guest
	I1002 20:03:56.700191   26859 out.go:179] * [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:03:56.701416   26859 notify.go:221] Checking for updates...
	I1002 20:03:56.701443   26859 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:03:56.702704   26859 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:03:56.703948   26859 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:03:56.705156   26859 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:03:56.706402   26859 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:03:56.707568   26859 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:03:56.708884   26859 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:03:56.731706   26859 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:03:56.731781   26859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:03:56.783359   26859 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:03:56.774223058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:03:56.783454   26859 docker.go:319] overlay module found
	I1002 20:03:56.785276   26859 out.go:179] * Using the docker driver based on user configuration
	I1002 20:03:56.786636   26859 start.go:306] selected driver: docker
	I1002 20:03:56.786642   26859 start.go:936] validating driver "docker" against <nil>
	I1002 20:03:56.786662   26859 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:03:56.787206   26859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:03:56.840751   26859 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:03:56.831728941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:03:56.840953   26859 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:03:56.841143   26859 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:03:56.842866   26859 out.go:179] * Using Docker driver with root privileges
	I1002 20:03:56.843945   26859 cni.go:84] Creating CNI manager for ""
	I1002 20:03:56.844002   26859 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:03:56.844010   26859 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:03:56.844064   26859 start.go:350] cluster config:
	{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:03:56.845376   26859 out.go:179] * Starting "functional-753218" primary control-plane node in "functional-753218" cluster
	I1002 20:03:56.846375   26859 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:03:56.847963   26859 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:03:56.849026   26859 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:03:56.849054   26859 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:03:56.849061   26859 cache.go:59] Caching tarball of preloaded images
	I1002 20:03:56.849141   26859 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:03:56.849164   26859 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:03:56.849171   26859 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:03:56.849556   26859 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/config.json ...
	I1002 20:03:56.849573   26859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/config.json: {Name:mk63b6529d75bfc74a6774a2544dc4a1bc706c16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:03:56.869263   26859 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:03:56.869271   26859 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:03:56.869288   26859 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:03:56.869315   26859 start.go:361] acquireMachinesLock for functional-753218: {Name:mk742badf6f1dbafca8397e398143e758831ae3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:03:56.869417   26859 start.go:365] duration metric: took 87.863µs to acquireMachinesLock for "functional-753218"
	I1002 20:03:56.869442   26859 start.go:94] Provisioning new machine with config: &{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:03:56.869508   26859 start.go:126] createHost starting for "" (driver="docker")
	I1002 20:03:56.871536   26859 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1002 20:03:56.871773   26859 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:46449 to docker env.
	I1002 20:03:56.871796   26859 start.go:160] libmachine.API.Create for "functional-753218" (driver="docker")
	I1002 20:03:56.871817   26859 client.go:168] LocalClient.Create starting
	I1002 20:03:56.871869   26859 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem
	I1002 20:03:56.871910   26859 main.go:141] libmachine: Decoding PEM data...
	I1002 20:03:56.871928   26859 main.go:141] libmachine: Parsing certificate...
	I1002 20:03:56.871972   26859 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem
	I1002 20:03:56.871986   26859 main.go:141] libmachine: Decoding PEM data...
	I1002 20:03:56.871993   26859 main.go:141] libmachine: Parsing certificate...
	I1002 20:03:56.872299   26859 cli_runner.go:164] Run: docker network inspect functional-753218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:03:56.889461   26859 cli_runner.go:211] docker network inspect functional-753218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:03:56.889518   26859 network_create.go:284] running [docker network inspect functional-753218] to gather additional debugging logs...
	I1002 20:03:56.889532   26859 cli_runner.go:164] Run: docker network inspect functional-753218
	W1002 20:03:56.906417   26859 cli_runner.go:211] docker network inspect functional-753218 returned with exit code 1
	I1002 20:03:56.906454   26859 network_create.go:287] error running [docker network inspect functional-753218]: docker network inspect functional-753218: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-753218 not found
	I1002 20:03:56.906470   26859 network_create.go:289] output of [docker network inspect functional-753218]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-753218 not found
	
	** /stderr **
	I1002 20:03:56.906593   26859 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:03:56.923200   26859 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b80f30}
	I1002 20:03:56.923224   26859 network_create.go:124] attempt to create docker network functional-753218 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:03:56.923269   26859 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-753218 functional-753218
	I1002 20:03:56.978455   26859 network_create.go:108] docker network functional-753218 192.168.49.0/24 created
	I1002 20:03:56.978475   26859 kic.go:121] calculated static IP "192.168.49.2" for the "functional-753218" container
	I1002 20:03:56.978548   26859 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:03:56.994944   26859 cli_runner.go:164] Run: docker volume create functional-753218 --label name.minikube.sigs.k8s.io=functional-753218 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:03:57.013043   26859 oci.go:103] Successfully created a docker volume functional-753218
	I1002 20:03:57.013099   26859 cli_runner.go:164] Run: docker run --rm --name functional-753218-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-753218 --entrypoint /usr/bin/test -v functional-753218:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:03:57.411349   26859 oci.go:107] Successfully prepared a docker volume functional-753218
	I1002 20:03:57.411384   26859 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:03:57.411406   26859 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:03:57.411477   26859 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-753218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:04:01.673943   26859 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-753218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.262430993s)
	I1002 20:04:01.673968   26859 kic.go:203] duration metric: took 4.26255995s to extract preloaded images to volume ...
	W1002 20:04:01.674054   26859 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 20:04:01.674079   26859 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 20:04:01.674108   26859 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:04:01.730465   26859 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-753218 --name functional-753218 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-753218 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-753218 --network functional-753218 --ip 192.168.49.2 --volume functional-753218:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:04:01.989007   26859 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Running}}
	I1002 20:04:02.008696   26859 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:04:02.027811   26859 cli_runner.go:164] Run: docker exec functional-753218 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:04:02.078047   26859 oci.go:144] the created container "functional-753218" has a running status.
	I1002 20:04:02.078067   26859 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa...
	I1002 20:04:02.409008   26859 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:04:02.437178   26859 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:04:02.456384   26859 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:04:02.456394   26859 kic_runner.go:114] Args: [docker exec --privileged functional-753218 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:04:02.502602   26859 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:04:02.522447   26859 machine.go:93] provisionDockerMachine start ...
	I1002 20:04:02.522569   26859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:04:02.541758   26859 main.go:141] libmachine: Using SSH client type: native
	I1002 20:04:02.541998   26859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:04:02.542005   26859 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:04:02.688814   26859 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:04:02.688834   26859 ubuntu.go:182] provisioning hostname "functional-753218"
	I1002 20:04:02.688922   26859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:04:02.707361   26859 main.go:141] libmachine: Using SSH client type: native
	I1002 20:04:02.707558   26859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:04:02.707565   26859 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753218 && echo "functional-753218" | sudo tee /etc/hostname
	I1002 20:04:02.859822   26859 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:04:02.859910   26859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:04:02.877673   26859 main.go:141] libmachine: Using SSH client type: native
	I1002 20:04:02.877938   26859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:04:02.877964   26859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753218/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:04:03.021323   26859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
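The SSH command above is an idempotent /etc/hosts edit: if no line already ends with the hostname, an existing 127.0.1.1 entry is rewritten in place, otherwise a fresh one is appended. The same logic as a small Go function (a sketch only; minikube itself runs the shell shown above over SSH):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureLoopbackHostname returns hosts-file contents guaranteed to map
// 127.0.1.1 to name, matching the grep/sed/tee shell in the log above.
func ensureLoopbackHostname(hosts, name string) string {
	// Already present on some line? Leave the file alone.
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts
	}
	// Rewrite an existing 127.0.1.1 entry in place if there is one.
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	// Otherwise append a fresh entry.
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureLoopbackHostname("127.0.0.1 localhost\n", "functional-753218"))
}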
	I1002 20:04:03.021343   26859 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:04:03.021357   26859 ubuntu.go:190] setting up certificates
	I1002 20:04:03.021365   26859 provision.go:84] configureAuth start
	I1002 20:04:03.021417   26859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:04:03.038699   26859 provision.go:143] copyHostCerts
	I1002 20:04:03.038749   26859 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:04:03.038756   26859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:04:03.038826   26859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:04:03.038956   26859 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:04:03.038961   26859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:04:03.038989   26859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:04:03.039056   26859 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:04:03.039059   26859 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:04:03.039080   26859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:04:03.039136   26859 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.functional-753218 san=[127.0.0.1 192.168.49.2 functional-753218 localhost minikube]
	I1002 20:04:03.322200   26859 provision.go:177] copyRemoteCerts
	I1002 20:04:03.322252   26859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:04:03.322285   26859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:04:03.340343   26859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:04:03.442066   26859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:04:03.461464   26859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:04:03.478192   26859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:04:03.494806   26859 provision.go:87] duration metric: took 473.422631ms to configureAuth
	I1002 20:04:03.494825   26859 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:04:03.495024   26859 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:04:03.495139   26859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:04:03.512795   26859 main.go:141] libmachine: Using SSH client type: native
	I1002 20:04:03.513015   26859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:04:03.513026   26859 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:04:03.768475   26859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:04:03.768495   26859 machine.go:96] duration metric: took 1.246032626s to provisionDockerMachine
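Each of these provisioning steps runs over the SSH port Docker published for the container (127.0.0.1:32778 in this run), authenticated with the per-machine key created earlier. A sketch of that runner pattern using golang.org/x/crypto/ssh, replaying the crio.minikube write from the step above; the endpoint, user, and key path are taken from this log, and error handling is deliberately minimal:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	conn, err := ssh.Dial("tcp", "127.0.0.1:32778", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only on a local test rig
	})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	session, err := conn.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// The same insecure-registry drop-in the log writes, then a crio restart.
	out, err := session.CombinedOutput(`sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`)
	fmt.Printf("%s err=%v\n", out, err)
}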
	I1002 20:04:03.768502   26859 client.go:171] duration metric: took 6.896681084s to LocalClient.Create
	I1002 20:04:03.768517   26859 start.go:168] duration metric: took 6.896721131s to libmachine.API.Create "functional-753218"
	I1002 20:04:03.768522   26859 start.go:294] postStartSetup for "functional-753218" (driver="docker")
	I1002 20:04:03.768530   26859 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:04:03.768575   26859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:04:03.768607   26859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:04:03.786536   26859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:04:03.888539   26859 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:04:03.891938   26859 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:04:03.891955   26859 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:04:03.891964   26859 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:04:03.892011   26859 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:04:03.892086   26859 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:04:03.892175   26859 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts -> hosts in /etc/test/nested/copy/12851
	I1002 20:04:03.892207   26859 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12851
	I1002 20:04:03.899927   26859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:04:03.919952   26859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts --> /etc/test/nested/copy/12851/hosts (40 bytes)
	I1002 20:04:03.936672   26859 start.go:297] duration metric: took 168.135612ms for postStartSetup
	I1002 20:04:03.937018   26859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:04:03.954078   26859 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/config.json ...
	I1002 20:04:03.954349   26859 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:04:03.954381   26859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:04:03.972539   26859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:04:04.070565   26859 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:04:04.074801   26859 start.go:129] duration metric: took 7.205278788s to createHost
	I1002 20:04:04.074817   26859 start.go:84] releasing machines lock for "functional-753218", held for 7.205392272s
	I1002 20:04:04.074886   26859 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:04:04.093988   26859 out.go:179] * Found network options:
	I1002 20:04:04.095247   26859 out.go:179]   - HTTP_PROXY=localhost:46449
	W1002 20:04:04.096499   26859 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1002 20:04:04.097541   26859 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1002 20:04:04.098856   26859 ssh_runner.go:195] Run: cat /version.json
	I1002 20:04:04.098895   26859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:04:04.098920   26859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:04:04.098969   26859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:04:04.117741   26859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:04:04.118713   26859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:04:04.214864   26859 ssh_runner.go:195] Run: systemctl --version
	I1002 20:04:04.267382   26859 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:04:04.299905   26859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:04:04.304467   26859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:04:04.304522   26859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:04:04.330726   26859 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 20:04:04.330738   26859 start.go:496] detecting cgroup driver to use...
	I1002 20:04:04.330767   26859 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:04:04.330811   26859 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:04:04.345933   26859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:04:04.358018   26859 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:04:04.358060   26859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:04:04.373794   26859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:04:04.390870   26859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:04:04.466731   26859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:04:04.548966   26859 docker.go:234] disabling docker service ...
	I1002 20:04:04.549019   26859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:04:04.567568   26859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:04:04.579820   26859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:04:04.658641   26859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:04:04.736193   26859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:04:04.748208   26859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:04:04.762192   26859 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:04:04.762240   26859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:04:04.772421   26859 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:04:04.772468   26859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:04:04.781403   26859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:04:04.790209   26859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:04:04.798621   26859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:04:04.806404   26859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:04:04.814845   26859 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:04:04.827962   26859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
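Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf containing, approximately, the following key/value lines (the surrounding layout depends on the stock drop-in; only these settings are what the edits guarantee):

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

pause_image pins the sandbox image, cgroup_manager matches the systemd cgroup driver detected on the host, and the default_sysctls entry lets containers bind low ports without extra privileges.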
	I1002 20:04:04.836258   26859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:04:04.843238   26859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:04:04.850267   26859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:04:04.925382   26859 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:04:05.027000   26859 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:04:05.027047   26859 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:04:05.030799   26859 start.go:564] Will wait 60s for crictl version
	I1002 20:04:05.030859   26859 ssh_runner.go:195] Run: which crictl
	I1002 20:04:05.034266   26859 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:04:05.057141   26859 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
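After the crio restart, the log waits up to 60s for /var/run/crio/crio.sock to appear and then for a working crictl. A sketch of that poll using only the standard library (connecting to the CRI socket normally requires root):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket polls a unix socket until it accepts a connection or the
// timeout lapses, like the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if conn, err := net.Dial("unix", path); err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}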
	I1002 20:04:05.057215   26859 ssh_runner.go:195] Run: crio --version
	I1002 20:04:05.084442   26859 ssh_runner.go:195] Run: crio --version
	I1002 20:04:05.112950   26859 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:04:05.114321   26859 cli_runner.go:164] Run: docker network inspect functional-753218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:04:05.131456   26859 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:04:05.135279   26859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:04:05.145142   26859 kubeadm.go:883] updating cluster {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:04:05.145248   26859 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:04:05.145293   26859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:04:05.177210   26859 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:04:05.177221   26859 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:04:05.177265   26859 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:04:05.202294   26859 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:04:05.202304   26859 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:04:05.202310   26859 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:04:05.202385   26859 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:04:05.202443   26859 ssh_runner.go:195] Run: crio config
	I1002 20:04:05.247852   26859 cni.go:84] Creating CNI manager for ""
	I1002 20:04:05.247864   26859 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:04:05.247877   26859 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:04:05.247897   26859 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753218 NodeName:functional-753218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:04:05.248012   26859 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753218"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
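The generated /var/tmp/minikube/kubeadm.yaml is a single multi-document stream: InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration handed through to the components. A quick way to sanity-check which documents are present (a sketch using gopkg.in/yaml.v3; any YAML decoder with multi-document support would do):

package main

import (
	"bytes"
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path from the log
	if err != nil {
		panic(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF ends the "---"-separated stream
		}
		// Expect kubeadm.k8s.io/v1beta4 InitConfiguration and
		// ClusterConfiguration, then KubeletConfiguration and
		// KubeProxyConfiguration, as dumped above.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}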
	
	I1002 20:04:05.248064   26859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:04:05.255936   26859 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:04:05.255991   26859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:04:05.263311   26859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:04:05.275256   26859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:04:05.289768   26859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1002 20:04:05.302223   26859 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:04:05.305773   26859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:04:05.315480   26859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:04:05.390062   26859 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:04:05.412596   26859 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218 for IP: 192.168.49.2
	I1002 20:04:05.412610   26859 certs.go:195] generating shared ca certs ...
	I1002 20:04:05.412625   26859 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:04:05.412786   26859 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:04:05.412828   26859 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:04:05.412834   26859 certs.go:257] generating profile certs ...
	I1002 20:04:05.412898   26859 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key
	I1002 20:04:05.412910   26859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt with IP's: []
	I1002 20:04:05.601571   26859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt ...
	I1002 20:04:05.601587   26859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt: {Name:mk3b8ab1011e22736596208778d37fcb37d2a589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:04:05.601769   26859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key ...
	I1002 20:04:05.601782   26859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key: {Name:mk7dceb2e6ad6ba05e8f12b0e21d6a69c5d1745c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:04:05.601872   26859 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key.2c64f804
	I1002 20:04:05.601882   26859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt.2c64f804 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 20:04:05.670706   26859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt.2c64f804 ...
	I1002 20:04:05.670722   26859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt.2c64f804: {Name:mk53d7e00c7936420dfc2bacc88b79500715f02a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:04:05.670876   26859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key.2c64f804 ...
	I1002 20:04:05.670884   26859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key.2c64f804: {Name:mk152dffd21454b5872755d0017dd4405bf28d9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:04:05.670971   26859 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt.2c64f804 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt
	I1002 20:04:05.671036   26859 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key.2c64f804 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key
	I1002 20:04:05.671082   26859 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key
	I1002 20:04:05.671093   26859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt with IP's: []
	I1002 20:04:05.824481   26859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt ...
	I1002 20:04:05.824495   26859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt: {Name:mkb7904c953630ed77b5e70a822da636b6a56d86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:04:05.824675   26859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key ...
	I1002 20:04:05.824684   26859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key: {Name:mkd3cc5a8f4224432a0123c5ce31fffc2b980b1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
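The certs.go/crypto.go steps above mint the per-profile certificates signed by the shared minikubeCA: a client cert for the user, an apiserver serving cert with the SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2), and an aggregator proxy-client cert. A minimal crypto/x509 sketch of the same CA-signed issuance, with a throwaway CA standing in for .minikube/ca.{crt,key}:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a serving cert for the given IP SANs, signed by
// ca -- the shape of minikube's "generating signed profile cert" step.
func signServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	// Throwaway CA for the sketch.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}
	// SANs copied from the apiserver cert generated in the log.
	pemCert, err := signServingCert(caCert, caKey, []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
		net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pemCert)
}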
	I1002 20:04:05.824906   26859 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:04:05.824939   26859 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:04:05.824945   26859 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:04:05.824966   26859 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:04:05.824991   26859 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:04:05.825011   26859 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:04:05.825043   26859 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:04:05.825537   26859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:04:05.843095   26859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:04:05.859925   26859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:04:05.876693   26859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:04:05.893694   26859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:04:05.910784   26859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:04:05.927771   26859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:04:05.945386   26859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:04:05.962456   26859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:04:05.981199   26859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:04:05.997870   26859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:04:06.014467   26859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:04:06.026496   26859 ssh_runner.go:195] Run: openssl version
	I1002 20:04:06.032418   26859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:04:06.040880   26859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:04:06.044392   26859 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:04:06.044428   26859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:04:06.078316   26859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:04:06.087119   26859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:04:06.095473   26859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:04:06.099094   26859 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:04:06.099140   26859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:04:06.133594   26859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:04:06.142198   26859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:04:06.150279   26859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:04:06.153826   26859 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:04:06.153896   26859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:04:06.187055   26859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:04:06.195729   26859 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:04:06.199156   26859 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:04:06.199211   26859 kubeadm.go:400] StartCluster: {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:04:06.199271   26859 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:04:06.199306   26859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:04:06.227616   26859 cri.go:89] found id: ""
	I1002 20:04:06.227693   26859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:04:06.236002   26859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:04:06.244457   26859 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:04:06.244500   26859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:04:06.252141   26859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:04:06.252149   26859 kubeadm.go:157] found existing configuration files:
	
	I1002 20:04:06.252190   26859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:04:06.260210   26859 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:04:06.260264   26859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:04:06.267393   26859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:04:06.274582   26859 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:04:06.274627   26859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:04:06.281668   26859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:04:06.289540   26859 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:04:06.289575   26859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:04:06.296595   26859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:04:06.303866   26859 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:04:06.303927   26859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
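The kubeadm init invocation on the next line prefixes PATH with the version-pinned binaries directory and ignores exactly the preflight checks a kic container cannot satisfy: the pre-created minikube directories, port 10250 already bound on the shared host kernel, swap, the CPU/memory minimums, the bridge-nf-call-iptables proc file, and full system verification. A sketch of assembling that --ignore-preflight-errors list (the list itself is copied verbatim from the log):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Checks skipped because the "node" is a container on a shared kernel.
	ignores := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd",
		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
		"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
	}
	cmd := fmt.Sprintf(
		`sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init`+
			` --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
		strings.Join(ignores, ","))
	fmt.Println(cmd)
}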
	I1002 20:04:06.311033   26859 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:04:06.347366   26859 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:04:06.347437   26859 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:04:06.367095   26859 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:04:06.367183   26859 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:04:06.367227   26859 kubeadm.go:318] OS: Linux
	I1002 20:04:06.367269   26859 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:04:06.367323   26859 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:04:06.367362   26859 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:04:06.367412   26859 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:04:06.367449   26859 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:04:06.367497   26859 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:04:06.367533   26859 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:04:06.367566   26859 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:04:06.423151   26859 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:04:06.423272   26859 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:04:06.423406   26859 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:04:06.429938   26859 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:04:06.432380   26859 out.go:252]   - Generating certificates and keys ...
	I1002 20:04:06.432450   26859 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:04:06.432501   26859 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:04:06.598337   26859 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:04:06.806719   26859 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:04:06.894469   26859 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:04:07.067701   26859 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:04:07.195499   26859 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:04:07.195609   26859 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [functional-753218 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:04:07.381948   26859 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:04:07.382061   26859 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [functional-753218 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:04:07.536261   26859 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:04:07.980919   26859 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:04:08.370886   26859 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:04:08.370942   26859 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:04:08.810306   26859 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:04:08.898907   26859 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:04:09.122373   26859 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:04:09.315738   26859 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:04:09.387565   26859 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:04:09.388004   26859 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:04:09.391866   26859 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:04:09.396749   26859 out.go:252]   - Booting up control plane ...
	I1002 20:04:09.396836   26859 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:04:09.396923   26859 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:04:09.396988   26859 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:04:09.408395   26859 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:04:09.408511   26859 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:04:09.415056   26859 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:04:09.415300   26859 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:04:09.415355   26859 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:04:09.506026   26859 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:04:09.506203   26859 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:04:10.007714   26859 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.792687ms
	I1002 20:04:10.010527   26859 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:04:10.010635   26859 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 20:04:10.010728   26859 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:04:10.010793   26859 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:08:10.011683   26859 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000430961s
	I1002 20:08:10.011951   26859 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000469361s
	I1002 20:08:10.012148   26859 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000504704s
	I1002 20:08:10.012159   26859 kubeadm.go:318] 
	I1002 20:08:10.012406   26859 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:08:10.012522   26859 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtime's CLI.
	I1002 20:08:10.012638   26859 kubeadm.go:318] Here is one example of how you may list all running Kubernetes containers by using crictl:
	I1002 20:08:10.012890   26859 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:08:10.013062   26859 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:08:10.013253   26859 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:08:10.013259   26859 kubeadm.go:318] 
	I1002 20:08:10.015755   26859 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:08:10.015906   26859 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:08:10.016711   26859 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:08:10.016780   26859 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 20:08:10.016928   26859 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-753218 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-753218 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.792687ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000430961s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000469361s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000504704s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
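The triage kubeadm suggests above has to run inside the node container when the docker driver is in use. A minimal sketch, assuming the profile name functional-753218 taken from this log:

    # Open a shell inside the minikube node (docker driver)
    minikube ssh -p functional-753218
    # List every Kubernetes container, including ones that already exited
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # Then read the logs of a failing container by its ID
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID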
	
	I1002 20:08:10.017017   26859 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:08:10.455106   26859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:08:10.467314   26859 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:08:10.467358   26859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:08:10.474954   26859 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:08:10.474963   26859 kubeadm.go:157] found existing configuration files:
	
	I1002 20:08:10.475000   26859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:08:10.482481   26859 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:08:10.482525   26859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:08:10.489541   26859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:08:10.496580   26859 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:08:10.496615   26859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:08:10.503534   26859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:08:10.510534   26859 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:08:10.510576   26859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:08:10.517522   26859 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:08:10.524684   26859 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:08:10.524720   26859 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
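Before retrying, minikube verifies that each kubeconfig under /etc/kubernetes still points at the expected control-plane endpoint and removes any file that does not (here all four files are already gone, so every grep exits with status 2 and the rm calls are no-ops). The logged grep/rm sequence is roughly equivalent to this shell loop (a sketch of the behavior, not minikube's actual implementation):

    # Drop kubeconfigs that do not reference the expected endpoint
    for f in admin kubelet controller-manager scheduler; do
        if ! sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/${f}.conf"; then
            sudo rm -f "/etc/kubernetes/${f}.conf"
        fi
    done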
	I1002 20:08:10.531334   26859 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:08:10.584677   26859 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:08:10.640115   26859 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:12:12.424097   26859 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:12:12.424284   26859 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:12:12.426234   26859 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:12:12.426286   26859 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:12:12.426369   26859 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:12:12.426411   26859 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:12:12.426437   26859 kubeadm.go:318] OS: Linux
	I1002 20:12:12.426483   26859 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:12:12.426518   26859 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:12:12.426560   26859 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:12:12.426596   26859 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:12:12.426634   26859 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:12:12.426701   26859 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:12:12.426738   26859 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:12:12.426774   26859 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:12:12.426832   26859 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:12:12.426947   26859 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:12:12.427033   26859 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:12:12.427095   26859 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:12:12.429128   26859 out.go:252]   - Generating certificates and keys ...
	I1002 20:12:12.429189   26859 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:12:12.429246   26859 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:12:12.429308   26859 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:12:12.429356   26859 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:12:12.429416   26859 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:12:12.429460   26859 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:12:12.429526   26859 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:12:12.429597   26859 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:12:12.429722   26859 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:12:12.429814   26859 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:12:12.429857   26859 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:12:12.429929   26859 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:12:12.429996   26859 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:12:12.430054   26859 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:12:12.430095   26859 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:12:12.430143   26859 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:12:12.430205   26859 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:12:12.430290   26859 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:12:12.430346   26859 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:12:12.431739   26859 out.go:252]   - Booting up control plane ...
	I1002 20:12:12.431811   26859 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:12:12.431878   26859 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:12:12.431937   26859 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:12:12.432024   26859 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:12:12.432111   26859 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:12:12.432214   26859 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:12:12.432286   26859 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:12:12.432322   26859 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:12:12.432430   26859 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:12:12.432509   26859 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:12:12.432554   26859 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.332089ms
	I1002 20:12:12.432628   26859 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:12:12.432701   26859 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 20:12:12.432778   26859 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:12:12.432850   26859 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:12:12.432911   26859 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000033383s
	I1002 20:12:12.432965   26859 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000195333s
	I1002 20:12:12.433028   26859 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000401588s
	I1002 20:12:12.433032   26859 kubeadm.go:318] 
	I1002 20:12:12.433110   26859 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:12:12.433178   26859 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:12:12.433259   26859 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:12:12.433332   26859 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:12:12.433388   26859 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:12:12.433452   26859 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:12:12.433467   26859 kubeadm.go:318] 
	I1002 20:12:12.433516   26859 kubeadm.go:402] duration metric: took 8m6.234309022s to StartCluster
	I1002 20:12:12.433549   26859 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:12:12.433601   26859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:12:12.457484   26859 cri.go:89] found id: ""
	I1002 20:12:12.457506   26859 logs.go:282] 0 containers: []
	W1002 20:12:12.457512   26859 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:12:12.457519   26859 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:12:12.457567   26859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:12:12.483169   26859 cri.go:89] found id: ""
	I1002 20:12:12.483182   26859 logs.go:282] 0 containers: []
	W1002 20:12:12.483188   26859 logs.go:284] No container was found matching "etcd"
	I1002 20:12:12.483193   26859 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:12:12.483244   26859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:12:12.507933   26859 cri.go:89] found id: ""
	I1002 20:12:12.507946   26859 logs.go:282] 0 containers: []
	W1002 20:12:12.507952   26859 logs.go:284] No container was found matching "coredns"
	I1002 20:12:12.507956   26859 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:12:12.508001   26859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:12:12.532431   26859 cri.go:89] found id: ""
	I1002 20:12:12.532446   26859 logs.go:282] 0 containers: []
	W1002 20:12:12.532453   26859 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:12:12.532457   26859 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:12:12.532509   26859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:12:12.556930   26859 cri.go:89] found id: ""
	I1002 20:12:12.556945   26859 logs.go:282] 0 containers: []
	W1002 20:12:12.556954   26859 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:12:12.556960   26859 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:12:12.557010   26859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:12:12.581691   26859 cri.go:89] found id: ""
	I1002 20:12:12.581708   26859 logs.go:282] 0 containers: []
	W1002 20:12:12.581717   26859 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:12:12.581723   26859 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:12:12.581766   26859 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:12:12.605916   26859 cri.go:89] found id: ""
	I1002 20:12:12.605933   26859 logs.go:282] 0 containers: []
	W1002 20:12:12.605943   26859 logs.go:284] No container was found matching "kindnet"
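Each probe above asks CRI-O for containers by component name and gets an empty list, which confirms that no control-plane container was ever created, as opposed to created and then crashed. The same sweep can be reproduced by hand (a sketch using the exact commands from this log):

    # Probe CRI-O for every expected component, including exited containers
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        echo "== $c =="
        sudo crictl ps -a --quiet --name="$c"
    done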
	I1002 20:12:12.605953   26859 logs.go:123] Gathering logs for kubelet ...
	I1002 20:12:12.605963   26859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:12:12.676554   26859 logs.go:123] Gathering logs for dmesg ...
	I1002 20:12:12.676578   26859 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:12:12.688230   26859 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:12:12.688247   26859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:12:12.745558   26859 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:12:12.739093    2413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:12:12.739672    2413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:12:12.741261    2413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:12:12.741694    2413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:12:12.742822    2413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:12:12.739093    2413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:12:12.739672    2413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:12:12.741261    2413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:12:12.741694    2413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:12:12.742822    2413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:12:12.745572   26859 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:12:12.745581   26859 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:12:12.808643   26859 logs.go:123] Gathering logs for container status ...
	I1002 20:12:12.808672   26859 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
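At this point minikube pulls diagnostics from four sources: the kubelet journal, the kernel ring buffer, kubectl describe nodes (which failed above because nothing is listening on port 8441), and the CRI-O journal plus container status. The same bundle can be collected manually with the commands the log shows:

    # Gather the same diagnostics minikube collects here
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo crictl ps -a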
	W1002 20:12:12.836746   26859 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.332089ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000033383s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000195333s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000401588s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:12:12.836784   26859 out.go:285] * 
	W1002 20:12:12.836871   26859 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.332089ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000033383s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000195333s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000401588s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:12:12.836887   26859 out.go:285] * 
	W1002 20:12:12.838806   26859 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:12:12.842032   26859 out.go:203] 
	W1002 20:12:12.843160   26859 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.332089ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000033383s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000195333s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000401588s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:12:12.843182   26859 out.go:285] * 
	I1002 20:12:12.844541   26859 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:12:05 functional-753218 crio[784]: time="2025-10-02T20:12:05.337641325Z" level=info msg="createCtr: removing container bd0e016c8fd5e7131cbc50cc9e7259c6ff508d126402a990a3c3cff28903b76c" id=aaf819ee-29bd-4ca2-a58b-84f487ef5c22 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:12:05 functional-753218 crio[784]: time="2025-10-02T20:12:05.337688029Z" level=info msg="createCtr: deleting container bd0e016c8fd5e7131cbc50cc9e7259c6ff508d126402a990a3c3cff28903b76c from storage" id=aaf819ee-29bd-4ca2-a58b-84f487ef5c22 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:12:05 functional-753218 crio[784]: time="2025-10-02T20:12:05.339666668Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-753218_kube-system_b932b0024653c86a7ea85a2a83a943a4_0" id=aaf819ee-29bd-4ca2-a58b-84f487ef5c22 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:12:08 functional-753218 crio[784]: time="2025-10-02T20:12:08.31353254Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=df71e56b-bb4a-4b66-8c10-aee9f14ce8c4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:12:08 functional-753218 crio[784]: time="2025-10-02T20:12:08.314380105Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=e95c9169-c3f1-48a0-aeaa-eb5b8101015e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:12:08 functional-753218 crio[784]: time="2025-10-02T20:12:08.315256842Z" level=info msg="Creating container: kube-system/etcd-functional-753218/etcd" id=b02dcc7b-0a98-4c54-8e72-919b87619459 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:12:08 functional-753218 crio[784]: time="2025-10-02T20:12:08.315455794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:12:08 functional-753218 crio[784]: time="2025-10-02T20:12:08.31875764Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:12:08 functional-753218 crio[784]: time="2025-10-02T20:12:08.319146681Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:12:08 functional-753218 crio[784]: time="2025-10-02T20:12:08.334467403Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=b02dcc7b-0a98-4c54-8e72-919b87619459 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:12:08 functional-753218 crio[784]: time="2025-10-02T20:12:08.335777816Z" level=info msg="createCtr: deleting container ID 0f94d6ce7bbd4f76540cfdacf7598cbb4c5bace1c048ab227ca47bd2249750af from idIndex" id=b02dcc7b-0a98-4c54-8e72-919b87619459 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:12:08 functional-753218 crio[784]: time="2025-10-02T20:12:08.335809584Z" level=info msg="createCtr: removing container 0f94d6ce7bbd4f76540cfdacf7598cbb4c5bace1c048ab227ca47bd2249750af" id=b02dcc7b-0a98-4c54-8e72-919b87619459 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:12:08 functional-753218 crio[784]: time="2025-10-02T20:12:08.33583931Z" level=info msg="createCtr: deleting container 0f94d6ce7bbd4f76540cfdacf7598cbb4c5bace1c048ab227ca47bd2249750af from storage" id=b02dcc7b-0a98-4c54-8e72-919b87619459 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:12:08 functional-753218 crio[784]: time="2025-10-02T20:12:08.337817557Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-753218_kube-system_91f2b96cb3e3f380ded17f30e8d873bd_0" id=b02dcc7b-0a98-4c54-8e72-919b87619459 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:12:09 functional-753218 crio[784]: time="2025-10-02T20:12:09.312931015Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=8110791b-45dc-4e33-acda-c1b131a0c835 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:12:09 functional-753218 crio[784]: time="2025-10-02T20:12:09.313789669Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=b4a653d5-1210-4785-974e-c846293233ba name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:12:09 functional-753218 crio[784]: time="2025-10-02T20:12:09.314502886Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-753218/kube-scheduler" id=14b83f17-e9d3-4b6a-93d1-eaeb07bc1360 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:12:09 functional-753218 crio[784]: time="2025-10-02T20:12:09.314787429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:12:09 functional-753218 crio[784]: time="2025-10-02T20:12:09.318019449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:12:09 functional-753218 crio[784]: time="2025-10-02T20:12:09.31843222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:12:09 functional-753218 crio[784]: time="2025-10-02T20:12:09.335301429Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=14b83f17-e9d3-4b6a-93d1-eaeb07bc1360 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:12:09 functional-753218 crio[784]: time="2025-10-02T20:12:09.336582214Z" level=info msg="createCtr: deleting container ID ffdff28bb4de4147f673119748c43a173c7c6dd5b326da6c9634d440beebeaef from idIndex" id=14b83f17-e9d3-4b6a-93d1-eaeb07bc1360 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:12:09 functional-753218 crio[784]: time="2025-10-02T20:12:09.336610809Z" level=info msg="createCtr: removing container ffdff28bb4de4147f673119748c43a173c7c6dd5b326da6c9634d440beebeaef" id=14b83f17-e9d3-4b6a-93d1-eaeb07bc1360 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:12:09 functional-753218 crio[784]: time="2025-10-02T20:12:09.336675463Z" level=info msg="createCtr: deleting container ffdff28bb4de4147f673119748c43a173c7c6dd5b326da6c9634d440beebeaef from storage" id=14b83f17-e9d3-4b6a-93d1-eaeb07bc1360 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:12:09 functional-753218 crio[784]: time="2025-10-02T20:12:09.338605107Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-753218_kube-system_b25a71e49a335bbe853872de1b1e3093_0" id=14b83f17-e9d3-4b6a-93d1-eaeb07bc1360 name=/runtime.v1.RuntimeService/CreateContainer
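Every CreateContainer attempt in this section dies with "cannot open sd-bus: No such file or directory", which is the actual reason the control plane never came up: the OCI runtime is trying to create the container's cgroup through systemd's D-Bus socket and cannot reach it. One plausible check, assuming CRI-O is configured with the systemd cgroup manager (this log does not confirm that setting):

    # Inside the node: which cgroup manager is CRI-O using?
    sudo crio config 2>/dev/null | grep -i cgroup_manager
    # Is the systemd bus socket present at all?
    ls -l /run/systemd/private /run/dbus/system_bus_socket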
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:12:13.735214    2563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:12:13.735786    2563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:12:13.737355    2563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:12:13.737838    2563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:12:13.739340    2563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:12:13 up 54 min,  0 user,  load average: 0.11, 0.07, 0.07
	Linux functional-753218 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:12:05 functional-753218 kubelet[1799]:  > logger="UnhandledError"
	Oct 02 20:12:05 functional-753218 kubelet[1799]: E1002 20:12:05.340042    1799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-753218" podUID="b932b0024653c86a7ea85a2a83a943a4"
	Oct 02 20:12:06 functional-753218 kubelet[1799]: E1002 20:12:06.772152    1799 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-753218&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 02 20:12:07 functional-753218 kubelet[1799]: E1002 20:12:07.660402    1799 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 02 20:12:08 functional-753218 kubelet[1799]: E1002 20:12:08.313165    1799 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:12:08 functional-753218 kubelet[1799]: E1002 20:12:08.338103    1799 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:12:08 functional-753218 kubelet[1799]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:12:08 functional-753218 kubelet[1799]:  > podSandboxID="65675f5fefd97e29be9e11728def45d5a2c472bac18f3ca682b57fda50e5abf7"
	Oct 02 20:12:08 functional-753218 kubelet[1799]: E1002 20:12:08.338194    1799 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:12:08 functional-753218 kubelet[1799]:         container etcd start failed in pod etcd-functional-753218_kube-system(91f2b96cb3e3f380ded17f30e8d873bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:12:08 functional-753218 kubelet[1799]:  > logger="UnhandledError"
	Oct 02 20:12:08 functional-753218 kubelet[1799]: E1002 20:12:08.338222    1799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753218" podUID="91f2b96cb3e3f380ded17f30e8d873bd"
	Oct 02 20:12:08 functional-753218 kubelet[1799]: E1002 20:12:08.935108    1799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753218?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:12:09 functional-753218 kubelet[1799]: I1002 20:12:09.085165    1799 kubelet_node_status.go:75] "Attempting to register node" node="functional-753218"
	Oct 02 20:12:09 functional-753218 kubelet[1799]: E1002 20:12:09.085561    1799 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753218"
	Oct 02 20:12:09 functional-753218 kubelet[1799]: E1002 20:12:09.312526    1799 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:12:09 functional-753218 kubelet[1799]: E1002 20:12:09.338955    1799 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:12:09 functional-753218 kubelet[1799]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:12:09 functional-753218 kubelet[1799]:  > podSandboxID="de1cc60186f989d4e0a8994c95a3f2e5173970c97e595ad7db2d469e1551df14"
	Oct 02 20:12:09 functional-753218 kubelet[1799]: E1002 20:12:09.339043    1799 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:12:09 functional-753218 kubelet[1799]:         container kube-scheduler start failed in pod kube-scheduler-functional-753218_kube-system(b25a71e49a335bbe853872de1b1e3093): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:12:09 functional-753218 kubelet[1799]:  > logger="UnhandledError"
	Oct 02 20:12:09 functional-753218 kubelet[1799]: E1002 20:12:09.339073    1799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753218" podUID="b25a71e49a335bbe853872de1b1e3093"
	Oct 02 20:12:11 functional-753218 kubelet[1799]: E1002 20:12:11.160167    1799 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753218.186ac570b511d2a5  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753218,UID:functional-753218,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753218 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753218,},FirstTimestamp:2025-10-02 20:08:12.306453157 +0000 UTC m=+0.389048074,LastTimestamp:2025-10-02 20:08:12.306453157 +0000 UTC m=+0.389048074,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753218,}"
	Oct 02 20:12:12 functional-753218 kubelet[1799]: E1002 20:12:12.330598    1799 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-753218\" not found"
	

-- /stdout --
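The recurring CreateContainerError in the kubelet log above ("cannot open sd-bus: No such file or directory") is the proximate cause of every control-plane pod failing to start: the message comes from systemd's sd-bus library, and it typically surfaces when a runtime configured for the systemd cgroup manager cannot reach a systemd bus inside the node. A minimal diagnostic sketch from the host, assuming the docker driver and the node container name shown in the log:

	# Is systemd actually running as PID 1 inside the minikube node container?
	docker exec functional-753218 ps -p 1 -o comm=
	# Does the private systemd bus socket that sd-bus dials exist?
	docker exec functional-753218 ls -l /run/systemd/private

If PID 1 is not systemd, or that socket is absent, any runtime running with cgroup_manager = "systemd" will fail container creation exactly as logged.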
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218: exit status 6 (288.744388ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:12:14.108101   32164 status.go:458] kubeconfig endpoint: get endpoint: "functional-753218" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-753218" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (497.47s)
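The status output above also shows a stale kubeconfig: the "functional-753218" entry is missing from the kubeconfig file, and kubectl still points at an old minikube-vm context. The remedy minikube itself suggests is `minikube update-context`; a hedged sketch of running and verifying it with the binary and profile from this run:

	# Rewrite this profile's kubeconfig entry to the current endpoint
	out/minikube-linux-amd64 update-context -p functional-753218
	# Confirm kubectl now resolves a context
	kubectl config current-context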

x
+
TestFunctional/serial/SoftStart (368.8s)

=== RUN   TestFunctional/serial/SoftStart
I1002 20:12:14.122054   12851 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-753218 --alsologtostderr -v=8
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-753218 --alsologtostderr -v=8: exit status 80 (6m6.394988559s)

-- stdout --
	* [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-753218" primary control-plane node in "functional-753218" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1002 20:12:14.161053   32280 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:12:14.161314   32280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:12:14.161324   32280 out.go:374] Setting ErrFile to fd 2...
	I1002 20:12:14.161329   32280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:12:14.161525   32280 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:12:14.161965   32280 out.go:368] Setting JSON to false
	I1002 20:12:14.162918   32280 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3283,"bootTime":1759432651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:12:14.163001   32280 start.go:140] virtualization: kvm guest
	I1002 20:12:14.165258   32280 out.go:179] * [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:12:14.166596   32280 notify.go:221] Checking for updates...
	I1002 20:12:14.166661   32280 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:12:14.168151   32280 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:12:14.169781   32280 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:14.170964   32280 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:12:14.172159   32280 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:12:14.173393   32280 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:12:14.175005   32280 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:12:14.175089   32280 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:12:14.198042   32280 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:12:14.198110   32280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:12:14.249812   32280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:12:14.240278836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:12:14.249943   32280 docker.go:319] overlay module found
	I1002 20:12:14.251744   32280 out.go:179] * Using the docker driver based on existing profile
	I1002 20:12:14.252771   32280 start.go:306] selected driver: docker
	I1002 20:12:14.252788   32280 start.go:936] validating driver "docker" against &{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:12:14.252894   32280 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:12:14.253012   32280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:12:14.302717   32280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:12:14.29341416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:12:14.303277   32280 cni.go:84] Creating CNI manager for ""
	I1002 20:12:14.303332   32280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:12:14.303374   32280 start.go:350] cluster config:
	{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:12:14.305248   32280 out.go:179] * Starting "functional-753218" primary control-plane node in "functional-753218" cluster
	I1002 20:12:14.306703   32280 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:12:14.308110   32280 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:12:14.309231   32280 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:12:14.309270   32280 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:12:14.309292   32280 cache.go:59] Caching tarball of preloaded images
	I1002 20:12:14.309321   32280 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:12:14.309392   32280 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:12:14.309404   32280 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:12:14.309506   32280 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/config.json ...
	I1002 20:12:14.328595   32280 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:12:14.328612   32280 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:12:14.328641   32280 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:12:14.328685   32280 start.go:361] acquireMachinesLock for functional-753218: {Name:mk742badf6f1dbafca8397e398143e758831ae3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:12:14.328749   32280 start.go:365] duration metric: took 40.346µs to acquireMachinesLock for "functional-753218"
	I1002 20:12:14.328768   32280 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:12:14.328773   32280 fix.go:55] fixHost starting: 
	I1002 20:12:14.328978   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:14.345315   32280 fix.go:113] recreateIfNeeded on functional-753218: state=Running err=<nil>
	W1002 20:12:14.345339   32280 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:12:14.347103   32280 out.go:252] * Updating the running docker "functional-753218" container ...
	I1002 20:12:14.347127   32280 machine.go:93] provisionDockerMachine start ...
	I1002 20:12:14.347175   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.364778   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:14.365056   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:14.365071   32280 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:12:14.506481   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:12:14.506514   32280 ubuntu.go:182] provisioning hostname "functional-753218"
	I1002 20:12:14.506576   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.523646   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:14.523886   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:14.523904   32280 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753218 && echo "functional-753218" | sudo tee /etc/hostname
	I1002 20:12:14.674327   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:12:14.674412   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.691957   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:14.692191   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:14.692210   32280 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753218/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:12:14.834109   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
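The SSH script above is minikube's idempotent /etc/hosts fixup: it acts only when no line already maps the hostname, rewriting an existing 127.0.1.1 entry in place and appending one otherwise (hence the empty command output here). A quick hedged check that the mapping landed, reusing the container name from the log:

	docker exec functional-753218 grep -n '127.0.1.1' /etc/hosts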
	I1002 20:12:14.834144   32280 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:12:14.834205   32280 ubuntu.go:190] setting up certificates
	I1002 20:12:14.834219   32280 provision.go:84] configureAuth start
	I1002 20:12:14.834287   32280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:12:14.852021   32280 provision.go:143] copyHostCerts
	I1002 20:12:14.852056   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:12:14.852091   32280 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:12:14.852111   32280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:12:14.852184   32280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:12:14.852289   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:12:14.852315   32280 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:12:14.852322   32280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:12:14.852367   32280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:12:14.852431   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:12:14.852454   32280 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:12:14.852460   32280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:12:14.852497   32280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:12:14.852565   32280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.functional-753218 san=[127.0.0.1 192.168.49.2 functional-753218 localhost minikube]
	I1002 20:12:14.908205   32280 provision.go:177] copyRemoteCerts
	I1002 20:12:14.908265   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:12:14.908316   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.925225   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.025356   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:12:15.025415   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:12:15.042012   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:12:15.042068   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:12:15.059080   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:12:15.059140   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:12:15.075501   32280 provision.go:87] duration metric: took 241.264617ms to configureAuth
	I1002 20:12:15.075530   32280 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:12:15.075723   32280 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:12:15.075835   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.092499   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:15.092718   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:15.092740   32280 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:12:15.350871   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:12:15.350899   32280 machine.go:96] duration metric: took 1.003764785s to provisionDockerMachine
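The printf above drops /etc/sysconfig/crio.minikube, an environment file that CRI-O's systemd unit is presumably wired to source so the restart picks up the extra --insecure-registry flag (the exact EnvironmentFile hookup is an assumption here, not shown in this log). One way to inspect the wiring from the host:

	# Show the full crio unit definition, including drop-ins that may reference the file
	docker exec functional-753218 systemctl cat crio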
	I1002 20:12:15.350913   32280 start.go:294] postStartSetup for "functional-753218" (driver="docker")
	I1002 20:12:15.350926   32280 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:12:15.350976   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:12:15.351010   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.368192   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.468976   32280 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:12:15.472512   32280 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1002 20:12:15.472527   32280 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1002 20:12:15.472540   32280 command_runner.go:130] > VERSION_ID="12"
	I1002 20:12:15.472545   32280 command_runner.go:130] > VERSION="12 (bookworm)"
	I1002 20:12:15.472553   32280 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1002 20:12:15.472556   32280 command_runner.go:130] > ID=debian
	I1002 20:12:15.472560   32280 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1002 20:12:15.472565   32280 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1002 20:12:15.472572   32280 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1002 20:12:15.472618   32280 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:12:15.472635   32280 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:12:15.472666   32280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:12:15.472731   32280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:12:15.472806   32280 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:12:15.472815   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:12:15.472889   32280 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts -> hosts in /etc/test/nested/copy/12851
	I1002 20:12:15.472896   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts -> /etc/test/nested/copy/12851/hosts
	I1002 20:12:15.472925   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12851
	I1002 20:12:15.480384   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:12:15.496865   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts --> /etc/test/nested/copy/12851/hosts (40 bytes)
	I1002 20:12:15.513635   32280 start.go:297] duration metric: took 162.708522ms for postStartSetup
	I1002 20:12:15.513745   32280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:12:15.513794   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.530644   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.628445   32280 command_runner.go:130] > 39%
	I1002 20:12:15.628745   32280 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:12:15.633076   32280 command_runner.go:130] > 179G
	I1002 20:12:15.633306   32280 fix.go:57] duration metric: took 1.304525715s for fixHost
	I1002 20:12:15.633325   32280 start.go:84] releasing machines lock for "functional-753218", held for 1.30456494s
	I1002 20:12:15.633398   32280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:12:15.650579   32280 ssh_runner.go:195] Run: cat /version.json
	I1002 20:12:15.650618   32280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:12:15.650631   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.650688   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.668938   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.669107   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.765770   32280 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1002 20:12:15.817112   32280 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 20:12:15.819166   32280 ssh_runner.go:195] Run: systemctl --version
	I1002 20:12:15.825335   32280 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1002 20:12:15.825364   32280 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 20:12:15.825559   32280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:12:15.861701   32280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 20:12:15.866192   32280 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 20:12:15.866262   32280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:12:15.866323   32280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:12:15.874084   32280 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:12:15.874106   32280 start.go:496] detecting cgroup driver to use...
	I1002 20:12:15.874141   32280 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:12:15.874206   32280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:12:15.887803   32280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:12:15.899530   32280 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:12:15.899588   32280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:12:15.913378   32280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:12:15.925494   32280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:12:16.013036   32280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:12:16.099049   32280 docker.go:234] disabling docker service ...
	I1002 20:12:16.099135   32280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:12:16.112698   32280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:12:16.124592   32280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:12:16.212924   32280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:12:16.298302   32280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:12:16.310529   32280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:12:16.324186   32280 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 20:12:16.324212   32280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:12:16.324248   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.332999   32280 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:12:16.333067   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.341758   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.350162   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.358406   32280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:12:16.365887   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.374465   32280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.382513   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.390861   32280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:12:16.397800   32280 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 20:12:16.397864   32280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:12:16.404831   32280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:12:16.487603   32280 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:12:19.404809   32280 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.917172928s)
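For reference, the sed pipeline above rewrites CRI-O's drop-in config before this restart. A sketch of the keys /etc/crio/crio.conf.d/02-crio.conf plausibly ends up containing after the sequence (the section headers are an assumption based on the stock CRI-O layout; only the keys the log touches are shown):

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

The systemd cgroup manager chosen here is also what makes the runtime depend on a reachable systemd bus, which ties directly to the "cannot open sd-bus" failures earlier in this report.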
	I1002 20:12:19.404840   32280 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:12:19.404889   32280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:12:19.408896   32280 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 20:12:19.408924   32280 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 20:12:19.408935   32280 command_runner.go:130] > Device: 0,59	Inode: 3843        Links: 1
	I1002 20:12:19.408947   32280 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:12:19.408956   32280 command_runner.go:130] > Access: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.408964   32280 command_runner.go:130] > Modify: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.408977   32280 command_runner.go:130] > Change: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.408989   32280 command_runner.go:130] >  Birth: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.409044   32280 start.go:564] Will wait 60s for crictl version
	I1002 20:12:19.409092   32280 ssh_runner.go:195] Run: which crictl
	I1002 20:12:19.412689   32280 command_runner.go:130] > /usr/local/bin/crictl
	I1002 20:12:19.412744   32280 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:12:19.436957   32280 command_runner.go:130] > Version:  0.1.0
	I1002 20:12:19.436979   32280 command_runner.go:130] > RuntimeName:  cri-o
	I1002 20:12:19.436984   32280 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1002 20:12:19.436989   32280 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 20:12:19.437005   32280 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:12:19.437072   32280 ssh_runner.go:195] Run: crio --version
	I1002 20:12:19.464211   32280 command_runner.go:130] > crio version 1.34.1
	I1002 20:12:19.464228   32280 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:12:19.464234   32280 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:12:19.464240   32280 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:12:19.464244   32280 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:12:19.464248   32280 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:12:19.464252   32280 command_runner.go:130] >    Compiler:       gc
	I1002 20:12:19.464257   32280 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:12:19.464261   32280 command_runner.go:130] >    Linkmode:       static
	I1002 20:12:19.464264   32280 command_runner.go:130] >    BuildTags:
	I1002 20:12:19.464267   32280 command_runner.go:130] >      static
	I1002 20:12:19.464275   32280 command_runner.go:130] >      netgo
	I1002 20:12:19.464279   32280 command_runner.go:130] >      osusergo
	I1002 20:12:19.464283   32280 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:12:19.464288   32280 command_runner.go:130] >      seccomp
	I1002 20:12:19.464291   32280 command_runner.go:130] >      apparmor
	I1002 20:12:19.464298   32280 command_runner.go:130] >      selinux
	I1002 20:12:19.464302   32280 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:12:19.464306   32280 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:12:19.464310   32280 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:12:19.464385   32280 ssh_runner.go:195] Run: crio --version
	I1002 20:12:19.491564   32280 command_runner.go:130] > crio version 1.34.1
	I1002 20:12:19.491590   32280 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:12:19.491596   32280 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:12:19.491601   32280 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:12:19.491605   32280 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:12:19.491609   32280 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:12:19.491612   32280 command_runner.go:130] >    Compiler:       gc
	I1002 20:12:19.491619   32280 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:12:19.491623   32280 command_runner.go:130] >    Linkmode:       static
	I1002 20:12:19.491627   32280 command_runner.go:130] >    BuildTags:
	I1002 20:12:19.491630   32280 command_runner.go:130] >      static
	I1002 20:12:19.491634   32280 command_runner.go:130] >      netgo
	I1002 20:12:19.491637   32280 command_runner.go:130] >      osusergo
	I1002 20:12:19.491641   32280 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:12:19.491665   32280 command_runner.go:130] >      seccomp
	I1002 20:12:19.491671   32280 command_runner.go:130] >      apparmor
	I1002 20:12:19.491681   32280 command_runner.go:130] >      selinux
	I1002 20:12:19.491687   32280 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:12:19.491700   32280 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:12:19.491719   32280 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:12:19.493718   32280 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:12:19.495253   32280 cli_runner.go:164] Run: docker network inspect functional-753218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:12:19.512253   32280 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:12:19.516262   32280 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1002 20:12:19.516341   32280 kubeadm.go:883] updating cluster {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:12:19.516485   32280 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:12:19.516543   32280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:12:19.546693   32280 command_runner.go:130] > {
	I1002 20:12:19.546715   32280 command_runner.go:130] >   "images":  [
	I1002 20:12:19.546721   32280 command_runner.go:130] >     {
	I1002 20:12:19.546728   32280 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:12:19.546732   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.546739   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:12:19.546745   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546774   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.546794   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:12:19.546808   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:12:19.546815   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546819   32280 command_runner.go:130] >       "size":  "109379124",
	I1002 20:12:19.546826   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.546835   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.546843   32280 command_runner.go:130] >     },
	I1002 20:12:19.546850   32280 command_runner.go:130] >     {
	I1002 20:12:19.546862   32280 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:12:19.546873   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.546881   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:12:19.546890   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546896   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.546909   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:12:19.546920   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:12:19.546937   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546947   32280 command_runner.go:130] >       "size":  "31470524",
	I1002 20:12:19.546954   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.546966   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.546972   32280 command_runner.go:130] >     },
	I1002 20:12:19.546979   32280 command_runner.go:130] >     {
	I1002 20:12:19.546989   32280 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:12:19.547010   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547022   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:12:19.547032   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547039   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547053   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:12:19.547065   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:12:19.547073   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547080   32280 command_runner.go:130] >       "size":  "76103547",
	I1002 20:12:19.547087   32280 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:12:19.547091   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547094   32280 command_runner.go:130] >     },
	I1002 20:12:19.547100   32280 command_runner.go:130] >     {
	I1002 20:12:19.547113   32280 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:12:19.547119   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547129   32280 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:12:19.547135   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547144   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547154   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:12:19.547167   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:12:19.547176   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547182   32280 command_runner.go:130] >       "size":  "195976448",
	I1002 20:12:19.547187   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547192   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547201   32280 command_runner.go:130] >       },
	I1002 20:12:19.547217   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547228   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547233   32280 command_runner.go:130] >     },
	I1002 20:12:19.547242   32280 command_runner.go:130] >     {
	I1002 20:12:19.547252   32280 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:12:19.547261   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547269   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:12:19.547276   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547281   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547301   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:12:19.547316   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:12:19.547321   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547331   32280 command_runner.go:130] >       "size":  "89046001",
	I1002 20:12:19.547337   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547346   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547352   32280 command_runner.go:130] >       },
	I1002 20:12:19.547361   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547368   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547376   32280 command_runner.go:130] >     },
	I1002 20:12:19.547380   32280 command_runner.go:130] >     {
	I1002 20:12:19.547390   32280 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:12:19.547396   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547407   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:12:19.547413   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547423   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547435   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:12:19.547451   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:12:19.547459   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547466   32280 command_runner.go:130] >       "size":  "76004181",
	I1002 20:12:19.547474   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547480   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547489   32280 command_runner.go:130] >       },
	I1002 20:12:19.547495   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547507   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547512   32280 command_runner.go:130] >     },
	I1002 20:12:19.547517   32280 command_runner.go:130] >     {
	I1002 20:12:19.547527   32280 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:12:19.547534   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547541   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:12:19.547546   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547552   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547561   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:12:19.547582   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:12:19.547592   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547599   32280 command_runner.go:130] >       "size":  "73138073",
	I1002 20:12:19.547606   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547615   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547624   32280 command_runner.go:130] >     },
	I1002 20:12:19.547629   32280 command_runner.go:130] >     {
	I1002 20:12:19.547641   32280 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:12:19.547658   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547667   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:12:19.547673   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547683   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547693   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:12:19.547720   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:12:19.547729   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547733   32280 command_runner.go:130] >       "size":  "53844823",
	I1002 20:12:19.547737   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547743   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547752   32280 command_runner.go:130] >       },
	I1002 20:12:19.547758   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547768   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547775   32280 command_runner.go:130] >     },
	I1002 20:12:19.547782   32280 command_runner.go:130] >     {
	I1002 20:12:19.547794   32280 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:12:19.547804   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547814   32280 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:12:19.547820   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547825   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547839   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:12:19.547853   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:12:19.547861   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547867   32280 command_runner.go:130] >       "size":  "742092",
	I1002 20:12:19.547876   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547887   32280 command_runner.go:130] >         "value":  "65535"
	I1002 20:12:19.547894   32280 command_runner.go:130] >       },
	I1002 20:12:19.547900   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547906   32280 command_runner.go:130] >       "pinned":  true
	I1002 20:12:19.547910   32280 command_runner.go:130] >     }
	I1002 20:12:19.547917   32280 command_runner.go:130] >   ]
	I1002 20:12:19.547924   32280 command_runner.go:130] > }
	I1002 20:12:19.548472   32280 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:12:19.548485   32280 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:12:19.548524   32280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:12:19.570809   32280 command_runner.go:130] > {
	I1002 20:12:19.570828   32280 command_runner.go:130] >   "images":  [
	I1002 20:12:19.570831   32280 command_runner.go:130] >     {
	I1002 20:12:19.570839   32280 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:12:19.570844   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.570849   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:12:19.570853   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570857   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.570864   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:12:19.570871   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:12:19.570877   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570882   32280 command_runner.go:130] >       "size":  "109379124",
	I1002 20:12:19.570889   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.570902   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.570908   32280 command_runner.go:130] >     },
	I1002 20:12:19.570914   32280 command_runner.go:130] >     {
	I1002 20:12:19.570922   32280 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:12:19.570928   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.570932   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:12:19.570938   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570941   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.570948   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:12:19.570958   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:12:19.570964   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570971   32280 command_runner.go:130] >       "size":  "31470524",
	I1002 20:12:19.570976   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.570985   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.570990   32280 command_runner.go:130] >     },
	I1002 20:12:19.570993   32280 command_runner.go:130] >     {
	I1002 20:12:19.571001   32280 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:12:19.571005   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571012   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:12:19.571016   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571021   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571028   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:12:19.571037   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:12:19.571043   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571047   32280 command_runner.go:130] >       "size":  "76103547",
	I1002 20:12:19.571050   32280 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:12:19.571056   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571059   32280 command_runner.go:130] >     },
	I1002 20:12:19.571065   32280 command_runner.go:130] >     {
	I1002 20:12:19.571071   32280 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:12:19.571077   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571081   32280 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:12:19.571087   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571091   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571099   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:12:19.571108   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:12:19.571113   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571117   32280 command_runner.go:130] >       "size":  "195976448",
	I1002 20:12:19.571122   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571126   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571132   32280 command_runner.go:130] >       },
	I1002 20:12:19.571139   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571145   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571152   32280 command_runner.go:130] >     },
	I1002 20:12:19.571157   32280 command_runner.go:130] >     {
	I1002 20:12:19.571163   32280 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:12:19.571169   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571173   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:12:19.571179   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571183   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571192   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:12:19.571201   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:12:19.571207   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571211   32280 command_runner.go:130] >       "size":  "89046001",
	I1002 20:12:19.571216   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571220   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571226   32280 command_runner.go:130] >       },
	I1002 20:12:19.571231   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571234   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571237   32280 command_runner.go:130] >     },
	I1002 20:12:19.571242   32280 command_runner.go:130] >     {
	I1002 20:12:19.571249   32280 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:12:19.571255   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571260   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:12:19.571265   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571269   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571276   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:12:19.571286   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:12:19.571292   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571295   32280 command_runner.go:130] >       "size":  "76004181",
	I1002 20:12:19.571301   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571305   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571310   32280 command_runner.go:130] >       },
	I1002 20:12:19.571314   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571318   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571323   32280 command_runner.go:130] >     },
	I1002 20:12:19.571327   32280 command_runner.go:130] >     {
	I1002 20:12:19.571335   32280 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:12:19.571339   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571349   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:12:19.571355   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571359   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571367   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:12:19.571376   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:12:19.571382   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571386   32280 command_runner.go:130] >       "size":  "73138073",
	I1002 20:12:19.571393   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571397   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571402   32280 command_runner.go:130] >     },
	I1002 20:12:19.571405   32280 command_runner.go:130] >     {
	I1002 20:12:19.571410   32280 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:12:19.571414   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571418   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:12:19.571422   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571425   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571431   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:12:19.571446   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:12:19.571455   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571461   32280 command_runner.go:130] >       "size":  "53844823",
	I1002 20:12:19.571469   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571474   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571482   32280 command_runner.go:130] >       },
	I1002 20:12:19.571488   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571495   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571498   32280 command_runner.go:130] >     },
	I1002 20:12:19.571504   32280 command_runner.go:130] >     {
	I1002 20:12:19.571510   32280 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:12:19.571516   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571520   32280 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:12:19.571526   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571530   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571542   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:12:19.571552   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:12:19.571556   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571562   32280 command_runner.go:130] >       "size":  "742092",
	I1002 20:12:19.571565   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571571   32280 command_runner.go:130] >         "value":  "65535"
	I1002 20:12:19.571575   32280 command_runner.go:130] >       },
	I1002 20:12:19.571581   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571585   32280 command_runner.go:130] >       "pinned":  true
	I1002 20:12:19.571590   32280 command_runner.go:130] >     }
	I1002 20:12:19.571593   32280 command_runner.go:130] >   ]
	I1002 20:12:19.571598   32280 command_runner.go:130] > }
	I1002 20:12:19.572597   32280 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:12:19.572614   32280 cache_images.go:85] Images are preloaded, skipping loading
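	The listing above is the second of two back-to-back `sudo crictl images --output json` probes; both return the same nine preloaded images. As a rough sketch (not minikube's own code), the payload can be decoded with a small set of struct fields. Two assumptions are read straight off the log: "size" arrives as a decimal string rather than a number, and "uid" is omitted entirely for images that declare no numeric user.

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// imageList mirrors the crictl JSON shape echoed in the log above.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"` // decimal string, bytes
			UID         *struct {
				Value string `json:"value"`
			} `json:"uid"` // nil when the image sets no numeric UID
			Username string `json:"username"`
			Pinned   bool   `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		var list imageList
		if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, img := range list.Images {
			fmt.Printf("%-55s pinned=%-5v size=%s\n", img.RepoTags, img.Pinned, img.Size)
		}
	}

	Piping the captured JSON through this program prints one line per image with its pin state, which is roughly the evidence behind the "all images are preloaded" conclusion at crio.go:514.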
	I1002 20:12:19.572621   32280 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:12:19.572734   32280 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
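	The kubelet stanza above is the systemd drop-in minikube renders before restarting the kubelet. A minimal sketch of how such a unit could be generated follows; the struct, template, and field names are illustrative, not minikube's actual types, and the values are the ones visible in the log (node IP 192.168.49.2, hostname functional-753218, Kubernetes v1.34.1).

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletUnit holds the per-node values substituted into the drop-in.
	type kubeletUnit struct {
		KubernetesVersion string
		NodeIP            string
		Hostname          string
	}

	const unitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		// The bare ExecStart= clears the packaged unit's command so the
		// drop-in can substitute its own, as systemd requires for
		// non-oneshot services.
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		if err := t.Execute(os.Stdout, kubeletUnit{
			KubernetesVersion: "v1.34.1",
			NodeIP:            "192.168.49.2",
			Hostname:          "functional-753218",
		}); err != nil {
			panic(err)
		}
	}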
	I1002 20:12:19.572796   32280 ssh_runner.go:195] Run: crio config
	I1002 20:12:19.612615   32280 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 20:12:19.612638   32280 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 20:12:19.612664   32280 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 20:12:19.612669   32280 command_runner.go:130] > #
	I1002 20:12:19.612689   32280 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 20:12:19.612698   32280 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 20:12:19.612709   32280 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 20:12:19.612721   32280 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 20:12:19.612728   32280 command_runner.go:130] > # reload'.
	I1002 20:12:19.612738   32280 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 20:12:19.612748   32280 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 20:12:19.612758   32280 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 20:12:19.612768   32280 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 20:12:19.612773   32280 command_runner.go:130] > [crio]
	I1002 20:12:19.612785   32280 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 20:12:19.612796   32280 command_runner.go:130] > # containers images, in this directory.
	I1002 20:12:19.612808   32280 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 20:12:19.612821   32280 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 20:12:19.612828   32280 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1002 20:12:19.612841   32280 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1002 20:12:19.612855   32280 command_runner.go:130] > # imagestore = ""
	I1002 20:12:19.612864   32280 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 20:12:19.612878   32280 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 20:12:19.612885   32280 command_runner.go:130] > # storage_driver = "overlay"
	I1002 20:12:19.612895   32280 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 20:12:19.612905   32280 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 20:12:19.612914   32280 command_runner.go:130] > # storage_option = [
	I1002 20:12:19.612917   32280 command_runner.go:130] > # ]
	I1002 20:12:19.612923   32280 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 20:12:19.612931   32280 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 20:12:19.612941   32280 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 20:12:19.612950   32280 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 20:12:19.612959   32280 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 20:12:19.612970   32280 command_runner.go:130] > # always happen on a node reboot
	I1002 20:12:19.612977   32280 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 20:12:19.612994   32280 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 20:12:19.613004   32280 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 20:12:19.613009   32280 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 20:12:19.613016   32280 command_runner.go:130] > # version_file_persist = ""
	I1002 20:12:19.613025   32280 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 20:12:19.613033   32280 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 20:12:19.613041   32280 command_runner.go:130] > # internal_wipe = true
	I1002 20:12:19.613054   32280 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1002 20:12:19.613066   32280 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1002 20:12:19.613075   32280 command_runner.go:130] > # internal_repair = true
	I1002 20:12:19.613083   32280 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 20:12:19.613095   32280 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 20:12:19.613113   32280 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 20:12:19.613120   32280 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
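	Taken together, the marker files above drive CRI-O's wipe decision: the version file gates wiping containers after a reboot, the persistent version file gates wiping images after an upgrade, and a missing clean-shutdown file invalidates the storage directory outright. A rough paraphrase of that rule in Go (a reading of the comments above, not CRI-O's actual code):

	package main

	import "fmt"

	// wipeTargets paraphrases the documented rules: a reboot invalidates
	// containers, an upgrade invalidates images, and a missing
	// clean-shutdown marker invalidates both.
	func wipeTargets(rebooted, upgraded, cleanShutdown bool) (containers, images bool) {
		containers = rebooted || !cleanShutdown
		images = upgraded || !cleanShutdown
		return
	}

	func main() {
		c, i := wipeTargets(true, false, true)
		fmt.Printf("after a plain reboot: wipe containers=%v, wipe images=%v\n", c, i)
	}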
	I1002 20:12:19.613129   32280 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 20:12:19.613134   32280 command_runner.go:130] > [crio.api]
	I1002 20:12:19.613142   32280 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 20:12:19.613150   32280 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 20:12:19.613162   32280 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 20:12:19.613173   32280 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 20:12:19.613185   32280 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 20:12:19.613197   32280 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 20:12:19.613204   32280 command_runner.go:130] > # stream_port = "0"
	I1002 20:12:19.613213   32280 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 20:12:19.613222   32280 command_runner.go:130] > # stream_enable_tls = false
	I1002 20:12:19.613231   32280 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 20:12:19.613238   32280 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 20:12:19.613248   32280 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 20:12:19.613260   32280 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1002 20:12:19.613266   32280 command_runner.go:130] > # stream_tls_cert = ""
	I1002 20:12:19.613274   32280 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 20:12:19.613292   32280 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1002 20:12:19.613301   32280 command_runner.go:130] > # stream_tls_key = ""
	I1002 20:12:19.613309   32280 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 20:12:19.613323   32280 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 20:12:19.613331   32280 command_runner.go:130] > # automatically pick up the changes.
	I1002 20:12:19.613340   32280 command_runner.go:130] > # stream_tls_ca = ""
	I1002 20:12:19.613394   32280 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:12:19.613408   32280 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 20:12:19.613420   32280 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:12:19.613430   32280 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
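	For reference, the documented fallback of 80 * 1024 * 1024 works out to 83,886,080 bytes (80 MiB), matching the grpc_max_send_msg_size and grpc_max_recv_msg_size defaults shown above.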
	I1002 20:12:19.613440   32280 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 20:12:19.613452   32280 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 20:12:19.613458   32280 command_runner.go:130] > [crio.runtime]
	I1002 20:12:19.613469   32280 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 20:12:19.613481   32280 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 20:12:19.613487   32280 command_runner.go:130] > # "nofile=1024:2048"
	I1002 20:12:19.613500   32280 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 20:12:19.613508   32280 command_runner.go:130] > # default_ulimits = [
	I1002 20:12:19.613514   32280 command_runner.go:130] > # ]
	I1002 20:12:19.613526   32280 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 20:12:19.613532   32280 command_runner.go:130] > # no_pivot = false
	I1002 20:12:19.613543   32280 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 20:12:19.613554   32280 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 20:12:19.613564   32280 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 20:12:19.613573   32280 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 20:12:19.613584   32280 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 20:12:19.613594   32280 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:12:19.613603   32280 command_runner.go:130] > # conmon = ""
	I1002 20:12:19.613611   32280 command_runner.go:130] > # Cgroup setting for conmon
	I1002 20:12:19.613625   32280 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 20:12:19.613632   32280 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 20:12:19.613642   32280 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 20:12:19.613664   32280 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 20:12:19.613682   32280 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:12:19.613692   32280 command_runner.go:130] > # conmon_env = [
	I1002 20:12:19.613698   32280 command_runner.go:130] > # ]
	I1002 20:12:19.613710   32280 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 20:12:19.613720   32280 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 20:12:19.613729   32280 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 20:12:19.613739   32280 command_runner.go:130] > # default_env = [
	I1002 20:12:19.613746   32280 command_runner.go:130] > # ]
	I1002 20:12:19.613758   32280 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 20:12:19.613769   32280 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1002 20:12:19.613778   32280 command_runner.go:130] > # selinux = false
	I1002 20:12:19.613788   32280 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 20:12:19.613803   32280 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1002 20:12:19.613814   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.613822   32280 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:12:19.613835   32280 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1002 20:12:19.613846   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.613852   32280 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1002 20:12:19.613865   32280 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 20:12:19.613878   32280 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 20:12:19.613890   32280 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 20:12:19.613899   32280 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1002 20:12:19.613908   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.613917   32280 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 20:12:19.613926   32280 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 20:12:19.613937   32280 command_runner.go:130] > # the cgroup blockio controller.
	I1002 20:12:19.613944   32280 command_runner.go:130] > # blockio_config_file = ""
	I1002 20:12:19.613958   32280 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1002 20:12:19.613965   32280 command_runner.go:130] > # blockio parameters.
	I1002 20:12:19.613974   32280 command_runner.go:130] > # blockio_reload = false
	I1002 20:12:19.613983   32280 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 20:12:19.613994   32280 command_runner.go:130] > # irqbalance daemon.
	I1002 20:12:19.614002   32280 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 20:12:19.614013   32280 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask CRI-O should
	I1002 20:12:19.614023   32280 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1002 20:12:19.614037   32280 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1002 20:12:19.614048   32280 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1002 20:12:19.614061   32280 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 20:12:19.614068   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.614077   32280 command_runner.go:130] > # rdt_config_file = ""
	I1002 20:12:19.614085   32280 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 20:12:19.614095   32280 command_runner.go:130] > # cgroup_manager = "systemd"
	I1002 20:12:19.614104   32280 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 20:12:19.614113   32280 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 20:12:19.614127   32280 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 20:12:19.614139   32280 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 20:12:19.614147   32280 command_runner.go:130] > # will be added.
	I1002 20:12:19.614155   32280 command_runner.go:130] > # default_capabilities = [
	I1002 20:12:19.614163   32280 command_runner.go:130] > # 	"CHOWN",
	I1002 20:12:19.614170   32280 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 20:12:19.614177   32280 command_runner.go:130] > # 	"FSETID",
	I1002 20:12:19.614182   32280 command_runner.go:130] > # 	"FOWNER",
	I1002 20:12:19.614187   32280 command_runner.go:130] > # 	"SETGID",
	I1002 20:12:19.614210   32280 command_runner.go:130] > # 	"SETUID",
	I1002 20:12:19.614214   32280 command_runner.go:130] > # 	"SETPCAP",
	I1002 20:12:19.614219   32280 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 20:12:19.614223   32280 command_runner.go:130] > # 	"KILL",
	I1002 20:12:19.614227   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614236   32280 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 20:12:19.614243   32280 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 20:12:19.614248   32280 command_runner.go:130] > # add_inheritable_capabilities = false
	I1002 20:12:19.614256   32280 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 20:12:19.614265   32280 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:12:19.614271   32280 command_runner.go:130] > default_sysctls = [
	I1002 20:12:19.614279   32280 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1002 20:12:19.614284   32280 command_runner.go:130] > ]
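	The net.ipv4.ip_unprivileged_port_start=0 sysctl above is what lets a non-root container process bind ports below 1024 without CAP_NET_BIND_SERVICE. A tiny sketch of the effect, assuming it runs as an unprivileged user inside such a container:

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		// Without ip_unprivileged_port_start=0 (or CAP_NET_BIND_SERVICE),
		// an unprivileged process would get "permission denied" here.
		ln, err := net.Listen("tcp", ":80")
		if err != nil {
			fmt.Fprintln(os.Stderr, "bind failed:", err)
			os.Exit(1)
		}
		defer ln.Close()
		fmt.Println("listening on", ln.Addr())
	}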
	I1002 20:12:19.614291   32280 command_runner.go:130] > # List of devices on the host that a
	I1002 20:12:19.614299   32280 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 20:12:19.614308   32280 command_runner.go:130] > # allowed_devices = [
	I1002 20:12:19.614313   32280 command_runner.go:130] > # 	"/dev/fuse",
	I1002 20:12:19.614321   32280 command_runner.go:130] > # 	"/dev/net/tun",
	I1002 20:12:19.614327   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614335   32280 command_runner.go:130] > # List of additional devices, specified as
	I1002 20:12:19.614349   32280 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 20:12:19.614359   32280 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 20:12:19.614368   32280 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:12:19.614376   32280 command_runner.go:130] > # additional_devices = [
	I1002 20:12:19.614381   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614388   32280 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 20:12:19.614394   32280 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 20:12:19.614398   32280 command_runner.go:130] > # 	"/etc/cdi",
	I1002 20:12:19.614402   32280 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 20:12:19.614404   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614410   32280 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 20:12:19.614416   32280 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 20:12:19.614420   32280 command_runner.go:130] > # Defaults to false.
	I1002 20:12:19.614424   32280 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 20:12:19.614432   32280 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 20:12:19.614438   32280 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 20:12:19.614441   32280 command_runner.go:130] > # hooks_dir = [
	I1002 20:12:19.614445   32280 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 20:12:19.614449   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614454   32280 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 20:12:19.614462   32280 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 20:12:19.614467   32280 command_runner.go:130] > # its default mounts from the following two files:
	I1002 20:12:19.614471   32280 command_runner.go:130] > #
	I1002 20:12:19.614476   32280 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 20:12:19.614484   32280 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 20:12:19.614489   32280 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 20:12:19.614494   32280 command_runner.go:130] > #
	I1002 20:12:19.614500   32280 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 20:12:19.614506   32280 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 20:12:19.614514   32280 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 20:12:19.614519   32280 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 20:12:19.614524   32280 command_runner.go:130] > #
	I1002 20:12:19.614528   32280 command_runner.go:130] > # default_mounts_file = ""
	I1002 20:12:19.614532   32280 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 20:12:19.614539   32280 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 20:12:19.614545   32280 command_runner.go:130] > # pids_limit = -1
	I1002 20:12:19.614551   32280 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1002 20:12:19.614559   32280 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 20:12:19.614564   32280 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 20:12:19.614572   32280 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 20:12:19.614578   32280 command_runner.go:130] > # log_size_max = -1
	I1002 20:12:19.614716   32280 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 20:12:19.614727   32280 command_runner.go:130] > # log_to_journald = false
	I1002 20:12:19.614733   32280 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 20:12:19.614738   32280 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 20:12:19.614745   32280 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 20:12:19.614750   32280 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 20:12:19.614757   32280 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 20:12:19.614761   32280 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 20:12:19.614766   32280 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 20:12:19.614772   32280 command_runner.go:130] > # read_only = false
	I1002 20:12:19.614777   32280 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 20:12:19.614785   32280 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 20:12:19.614789   32280 command_runner.go:130] > # live configuration reload.
	I1002 20:12:19.614795   32280 command_runner.go:130] > # log_level = "info"
	I1002 20:12:19.614800   32280 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 20:12:19.614807   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.614811   32280 command_runner.go:130] > # log_filter = ""
	I1002 20:12:19.614817   32280 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 20:12:19.614825   32280 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 20:12:19.614829   32280 command_runner.go:130] > # separated by comma.
	I1002 20:12:19.614839   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614846   32280 command_runner.go:130] > # uid_mappings = ""
	I1002 20:12:19.614851   32280 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 20:12:19.614859   32280 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 20:12:19.614863   32280 command_runner.go:130] > # separated by comma.
	I1002 20:12:19.614873   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614877   32280 command_runner.go:130] > # gid_mappings = ""
	I1002 20:12:19.614884   32280 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 20:12:19.614890   32280 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:12:19.614898   32280 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:12:19.614905   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614909   32280 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 20:12:19.614916   32280 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 20:12:19.614924   32280 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:12:19.614931   32280 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:12:19.614940   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614944   32280 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 20:12:19.614949   32280 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 20:12:19.614959   32280 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 20:12:19.614964   32280 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 20:12:19.614970   32280 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 20:12:19.614975   32280 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 20:12:19.614983   32280 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 20:12:19.614988   32280 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 20:12:19.614993   32280 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 20:12:19.614999   32280 command_runner.go:130] > # drop_infra_ctr = true
	I1002 20:12:19.615004   32280 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 20:12:19.615009   32280 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 20:12:19.615018   32280 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 20:12:19.615024   32280 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 20:12:19.615031   32280 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1002 20:12:19.615038   32280 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1002 20:12:19.615044   32280 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1002 20:12:19.615052   32280 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1002 20:12:19.615055   32280 command_runner.go:130] > # shared_cpuset = ""
	I1002 20:12:19.615063   32280 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 20:12:19.615068   32280 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 20:12:19.615073   32280 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 20:12:19.615080   32280 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 20:12:19.615086   32280 command_runner.go:130] > # pinns_path = ""
	I1002 20:12:19.615090   32280 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1002 20:12:19.615098   32280 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1002 20:12:19.615102   32280 command_runner.go:130] > # enable_criu_support = true
	I1002 20:12:19.615111   32280 command_runner.go:130] > # Enable/disable the generation of the container,
	I1002 20:12:19.615116   32280 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1002 20:12:19.615123   32280 command_runner.go:130] > # enable_pod_events = false
	I1002 20:12:19.615128   32280 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 20:12:19.615135   32280 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1002 20:12:19.615139   32280 command_runner.go:130] > # default_runtime = "crun"
	I1002 20:12:19.615146   32280 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 20:12:19.615152   32280 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior, where the path is created as a directory).
	I1002 20:12:19.615161   32280 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 20:12:19.615168   32280 command_runner.go:130] > # creation as a file is not desired either.
	I1002 20:12:19.615175   32280 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 20:12:19.615182   32280 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 20:12:19.615187   32280 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 20:12:19.615190   32280 command_runner.go:130] > # ]
	I1002 20:12:19.615195   32280 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 20:12:19.615201   32280 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 20:12:19.615207   32280 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1002 20:12:19.615214   32280 command_runner.go:130] > # Each entry in the table should follow the format:
	I1002 20:12:19.615216   32280 command_runner.go:130] > #
	I1002 20:12:19.615221   32280 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1002 20:12:19.615227   32280 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1002 20:12:19.615231   32280 command_runner.go:130] > # runtime_type = "oci"
	I1002 20:12:19.615237   32280 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1002 20:12:19.615241   32280 command_runner.go:130] > # inherit_default_runtime = false
	I1002 20:12:19.615246   32280 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1002 20:12:19.615252   32280 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1002 20:12:19.615256   32280 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1002 20:12:19.615262   32280 command_runner.go:130] > # monitor_env = []
	I1002 20:12:19.615266   32280 command_runner.go:130] > # privileged_without_host_devices = false
	I1002 20:12:19.615270   32280 command_runner.go:130] > # allowed_annotations = []
	I1002 20:12:19.615278   32280 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1002 20:12:19.615282   32280 command_runner.go:130] > # no_sync_log = false
	I1002 20:12:19.615288   32280 command_runner.go:130] > # default_annotations = {}
	I1002 20:12:19.615293   32280 command_runner.go:130] > # stream_websockets = false
	I1002 20:12:19.615299   32280 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:12:19.615333   32280 command_runner.go:130] > # Where:
	I1002 20:12:19.615343   32280 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1002 20:12:19.615349   32280 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1002 20:12:19.615354   32280 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 20:12:19.615363   32280 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 20:12:19.615366   32280 command_runner.go:130] > #   in $PATH.
	I1002 20:12:19.615375   32280 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1002 20:12:19.615380   32280 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 20:12:19.615387   32280 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1002 20:12:19.615391   32280 command_runner.go:130] > #   state.
	I1002 20:12:19.615400   32280 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 20:12:19.615413   32280 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 20:12:19.615421   32280 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1002 20:12:19.615428   32280 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1002 20:12:19.615435   32280 command_runner.go:130] > #   the values from the default runtime on load time.
	I1002 20:12:19.615441   32280 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 20:12:19.615446   32280 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 20:12:19.615452   32280 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 20:12:19.615458   32280 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 20:12:19.615465   32280 command_runner.go:130] > #   The currently recognized values are:
	I1002 20:12:19.615470   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 20:12:19.615479   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 20:12:19.615485   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 20:12:19.615490   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 20:12:19.615499   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 20:12:19.615505   32280 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 20:12:19.615514   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1002 20:12:19.615521   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1002 20:12:19.615529   32280 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 20:12:19.615534   32280 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1002 20:12:19.615541   32280 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1002 20:12:19.615549   32280 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1002 20:12:19.615555   32280 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1002 20:12:19.615564   32280 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1002 20:12:19.615569   32280 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1002 20:12:19.615579   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1002 20:12:19.615586   32280 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1002 20:12:19.615589   32280 command_runner.go:130] > #   deprecated option "conmon".
	I1002 20:12:19.615596   32280 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1002 20:12:19.615601   32280 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1002 20:12:19.615607   32280 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1002 20:12:19.615614   32280 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 20:12:19.615621   32280 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1002 20:12:19.615628   32280 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1002 20:12:19.615634   32280 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1002 20:12:19.615638   32280 command_runner.go:130] > #   conmon-rs by using:
	I1002 20:12:19.615656   32280 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1002 20:12:19.615668   32280 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1002 20:12:19.615682   32280 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1002 20:12:19.615690   32280 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1002 20:12:19.615695   32280 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1002 20:12:19.615704   32280 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1002 20:12:19.615712   32280 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1002 20:12:19.615720   32280 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1002 20:12:19.615731   32280 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1002 20:12:19.615747   32280 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1002 20:12:19.615756   32280 command_runner.go:130] > #   when the machine crashes.
	I1002 20:12:19.615765   32280 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1002 20:12:19.615774   32280 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1002 20:12:19.615784   32280 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1002 20:12:19.615788   32280 command_runner.go:130] > #   seccomp profile for the runtime.
	I1002 20:12:19.615797   32280 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1002 20:12:19.615804   32280 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1002 20:12:19.615810   32280 command_runner.go:130] > #
	I1002 20:12:19.615818   32280 command_runner.go:130] > # Using the seccomp notifier feature:
	I1002 20:12:19.615826   32280 command_runner.go:130] > #
	I1002 20:12:19.615838   32280 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1002 20:12:19.615850   32280 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1002 20:12:19.615854   32280 command_runner.go:130] > #
	I1002 20:12:19.615860   32280 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1002 20:12:19.615868   32280 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1002 20:12:19.615871   32280 command_runner.go:130] > #
	I1002 20:12:19.615880   32280 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1002 20:12:19.615889   32280 command_runner.go:130] > # feature.
	I1002 20:12:19.615894   32280 command_runner.go:130] > #
	I1002 20:12:19.615906   32280 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1002 20:12:19.615918   32280 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1002 20:12:19.615931   32280 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1002 20:12:19.615943   32280 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1002 20:12:19.615954   32280 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1002 20:12:19.615957   32280 command_runner.go:130] > #
	I1002 20:12:19.615964   32280 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1002 20:12:19.615972   32280 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1002 20:12:19.615977   32280 command_runner.go:130] > #
	I1002 20:12:19.615989   32280 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1002 20:12:19.616001   32280 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1002 20:12:19.616010   32280 command_runner.go:130] > #
	I1002 20:12:19.616019   32280 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1002 20:12:19.616031   32280 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1002 20:12:19.616039   32280 command_runner.go:130] > # limitation.
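A minimal sketch of the wiring described above: allowing the notifier annotation for a runtime handler amounts to one drop-in file. The file path and the choice of runc as the handler are illustrative; only the annotation name and the allowed_annotations mechanism come from the comments above.

	# /etc/crio/crio.conf.d/99-seccomp-notifier.conf (illustrative path)
	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/libexec/crio/runc"
	allowed_annotations = [
		# permit the seccomp notifier annotation on pods using this handler
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

A pod would then opt in with the annotation io.kubernetes.cri-o.seccompNotifierAction=stop and restartPolicy: Never, as the comments above require.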
	I1002 20:12:19.616045   32280 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1002 20:12:19.616054   32280 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1002 20:12:19.616058   32280 command_runner.go:130] > runtime_type = ""
	I1002 20:12:19.616063   32280 command_runner.go:130] > runtime_root = "/run/crun"
	I1002 20:12:19.616073   32280 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:12:19.616082   32280 command_runner.go:130] > runtime_config_path = ""
	I1002 20:12:19.616091   32280 command_runner.go:130] > container_min_memory = ""
	I1002 20:12:19.616098   32280 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:12:19.616107   32280 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:12:19.616115   32280 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:12:19.616124   32280 command_runner.go:130] > allowed_annotations = [
	I1002 20:12:19.616131   32280 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1002 20:12:19.616137   32280 command_runner.go:130] > ]
	I1002 20:12:19.616141   32280 command_runner.go:130] > privileged_without_host_devices = false
	I1002 20:12:19.616146   32280 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 20:12:19.616157   32280 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1002 20:12:19.616163   32280 command_runner.go:130] > runtime_type = ""
	I1002 20:12:19.616173   32280 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 20:12:19.616180   32280 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:12:19.616189   32280 command_runner.go:130] > runtime_config_path = ""
	I1002 20:12:19.616196   32280 command_runner.go:130] > container_min_memory = ""
	I1002 20:12:19.616206   32280 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:12:19.616215   32280 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:12:19.616221   32280 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:12:19.616228   32280 command_runner.go:130] > privileged_without_host_devices = false
	I1002 20:12:19.616238   32280 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 20:12:19.616247   32280 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 20:12:19.616258   32280 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 20:12:19.616272   32280 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1002 20:12:19.616289   32280 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1002 20:12:19.616305   32280 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1002 20:12:19.616314   32280 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1002 20:12:19.616323   32280 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 20:12:19.616340   32280 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 20:12:19.616353   32280 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 20:12:19.616366   32280 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 20:12:19.616380   32280 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 20:12:19.616387   32280 command_runner.go:130] > # Example:
	I1002 20:12:19.616393   32280 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 20:12:19.616401   32280 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 20:12:19.616408   32280 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 20:12:19.616420   32280 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 20:12:19.616430   32280 command_runner.go:130] > # cpuset = "0-1"
	I1002 20:12:19.616435   32280 command_runner.go:130] > # cpushares = "5"
	I1002 20:12:19.616442   32280 command_runner.go:130] > # cpuquota = "1000"
	I1002 20:12:19.616451   32280 command_runner.go:130] > # cpuperiod = "100000"
	I1002 20:12:19.616457   32280 command_runner.go:130] > # cpulimit = "35"
	I1002 20:12:19.616466   32280 command_runner.go:130] > # Where:
	I1002 20:12:19.616473   32280 command_runner.go:130] > # The workload name is workload-type.
	I1002 20:12:19.616483   32280 command_runner.go:130] > # To opt into this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 20:12:19.616489   32280 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 20:12:19.616502   32280 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 20:12:19.616516   32280 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 20:12:19.616528   32280 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
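Joined together, the commented fragments above form one complete workload drop-in. All values are taken verbatim from the example printed above; only the file path is illustrative.

	# /etc/crio/crio.conf.d/99-workload.conf (illustrative path)
	[crio.runtime.workloads.workload-type]
	# pods carrying this annotation (key only) opt into the workload
	activation_annotation = "io.crio/workload"
	# per-container overrides use this prefix, e.g. io.crio.workload-type/<ctr>
	annotation_prefix = "io.crio.workload-type"

	[crio.runtime.workloads.workload-type.resources]
	cpuset = "0-1"
	cpushares = "5"
	cpuquota = "1000"
	cpuperiod = "100000"
	cpulimit = "35"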
	I1002 20:12:19.616541   32280 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1002 20:12:19.616551   32280 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1002 20:12:19.616560   32280 command_runner.go:130] > # Default value is set to true
	I1002 20:12:19.616566   32280 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1002 20:12:19.616574   32280 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1002 20:12:19.616582   32280 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1002 20:12:19.616592   32280 command_runner.go:130] > # Default value is set to 'false'
	I1002 20:12:19.616601   32280 command_runner.go:130] > # disable_hostport_mapping = false
	I1002 20:12:19.616612   32280 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1002 20:12:19.616624   32280 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1002 20:12:19.616632   32280 command_runner.go:130] > # timezone = ""
	I1002 20:12:19.616642   32280 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 20:12:19.616658   32280 command_runner.go:130] > #
	I1002 20:12:19.616667   32280 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 20:12:19.616686   32280 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1002 20:12:19.616695   32280 command_runner.go:130] > [crio.image]
	I1002 20:12:19.616703   32280 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 20:12:19.616714   32280 command_runner.go:130] > # default_transport = "docker://"
	I1002 20:12:19.616725   32280 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 20:12:19.616732   32280 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:12:19.616739   32280 command_runner.go:130] > # global_auth_file = ""
	I1002 20:12:19.616751   32280 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 20:12:19.616762   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.616771   32280 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1002 20:12:19.616783   32280 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 20:12:19.616795   32280 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:12:19.616804   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.616811   32280 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 20:12:19.616817   32280 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 20:12:19.616825   32280 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1002 20:12:19.616830   32280 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1002 20:12:19.616837   32280 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 20:12:19.616842   32280 command_runner.go:130] > # pause_command = "/pause"
	I1002 20:12:19.616852   32280 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1002 20:12:19.616864   32280 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1002 20:12:19.616877   32280 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1002 20:12:19.616889   32280 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1002 20:12:19.616899   32280 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1002 20:12:19.616911   32280 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1002 20:12:19.616918   32280 command_runner.go:130] > # pinned_images = [
	I1002 20:12:19.616921   32280 command_runner.go:130] > # ]
	I1002 20:12:19.616928   32280 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 20:12:19.616937   32280 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 20:12:19.616942   32280 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 20:12:19.616947   32280 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 20:12:19.616955   32280 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 20:12:19.616959   32280 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1002 20:12:19.616965   32280 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1002 20:12:19.616973   32280 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1002 20:12:19.616979   32280 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1002 20:12:19.616988   32280 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or the
	I1002 20:12:19.616997   32280 command_runner.go:130] > # system-wide policy will be used as fallback. Must be an absolute path.
	I1002 20:12:19.617009   32280 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1002 20:12:19.617020   32280 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 20:12:19.617036   32280 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 20:12:19.617044   32280 command_runner.go:130] > # changing them here.
	I1002 20:12:19.617053   32280 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1002 20:12:19.617062   32280 command_runner.go:130] > # insecure_registries = [
	I1002 20:12:19.617066   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617073   32280 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 20:12:19.617078   32280 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 20:12:19.617084   32280 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 20:12:19.617089   32280 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 20:12:19.617095   32280 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 20:12:19.617101   32280 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1002 20:12:19.617107   32280 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1002 20:12:19.617111   32280 command_runner.go:130] > # auto_reload_registries = false
	I1002 20:12:19.617117   32280 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1002 20:12:19.617127   32280 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1002 20:12:19.617135   32280 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1002 20:12:19.617138   32280 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1002 20:12:19.617143   32280 command_runner.go:130] > # The mode of short name resolution.
	I1002 20:12:19.617149   32280 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1002 20:12:19.617158   32280 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1002 20:12:19.617163   32280 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1002 20:12:19.617169   32280 command_runner.go:130] > # short_name_mode = "enforcing"
	I1002 20:12:19.617175   32280 command_runner.go:130] > # OCIArtifactMountSupport determines whether CRI-O should support OCI artifacts.
	I1002 20:12:19.617182   32280 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1002 20:12:19.617186   32280 command_runner.go:130] > # oci_artifact_mount_support = true
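As a sketch, overriding a few of the [crio.image] options documented above might look like the following. The pause_image, signature_policy and short_name_mode values are the ones printed above; the pinned_images glob is a hypothetical pattern added only for illustration.

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	signature_policy = "/etc/crio/policy.json"
	short_name_mode = "enforcing"
	pinned_images = [
		# hypothetical glob: trailing * matches any image name with this prefix
		"registry.k8s.io/pause*",
	]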
	I1002 20:12:19.617192   32280 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 20:12:19.617197   32280 command_runner.go:130] > # CNI plugins.
	I1002 20:12:19.617200   32280 command_runner.go:130] > [crio.network]
	I1002 20:12:19.617206   32280 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 20:12:19.617212   32280 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1002 20:12:19.617219   32280 command_runner.go:130] > # cni_default_network = ""
	I1002 20:12:19.617231   32280 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 20:12:19.617240   32280 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 20:12:19.617246   32280 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 20:12:19.617250   32280 command_runner.go:130] > # plugin_dirs = [
	I1002 20:12:19.617254   32280 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 20:12:19.617256   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617261   32280 command_runner.go:130] > # List of included pod metrics.
	I1002 20:12:19.617266   32280 command_runner.go:130] > # included_pod_metrics = [
	I1002 20:12:19.617269   32280 command_runner.go:130] > # ]
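Spelled out with the defaults printed above, and nothing new assumed, the [crio.network] table would read:

	[crio.network]
	# cni_default_network left unset: use the first network found in network_dir
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
	]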
	I1002 20:12:19.617274   32280 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1002 20:12:19.617279   32280 command_runner.go:130] > [crio.metrics]
	I1002 20:12:19.617284   32280 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 20:12:19.617290   32280 command_runner.go:130] > # enable_metrics = false
	I1002 20:12:19.617294   32280 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 20:12:19.617298   32280 command_runner.go:130] > # By default, all metrics are enabled.
	I1002 20:12:19.617306   32280 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 20:12:19.617312   32280 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 20:12:19.617320   32280 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 20:12:19.617323   32280 command_runner.go:130] > # metrics_collectors = [
	I1002 20:12:19.617327   32280 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 20:12:19.617331   32280 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1002 20:12:19.617334   32280 command_runner.go:130] > # 	"containers_oom_total",
	I1002 20:12:19.617338   32280 command_runner.go:130] > # 	"processes_defunct",
	I1002 20:12:19.617341   32280 command_runner.go:130] > # 	"operations_total",
	I1002 20:12:19.617345   32280 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 20:12:19.617348   32280 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 20:12:19.617352   32280 command_runner.go:130] > # 	"operations_errors_total",
	I1002 20:12:19.617355   32280 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 20:12:19.617359   32280 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 20:12:19.617363   32280 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 20:12:19.617367   32280 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 20:12:19.617371   32280 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 20:12:19.617375   32280 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 20:12:19.617379   32280 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1002 20:12:19.617383   32280 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1002 20:12:19.617388   32280 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1002 20:12:19.617391   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617397   32280 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1002 20:12:19.617403   32280 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1002 20:12:19.617407   32280 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 20:12:19.617411   32280 command_runner.go:130] > # metrics_port = 9090
	I1002 20:12:19.617415   32280 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 20:12:19.617419   32280 command_runner.go:130] > # metrics_socket = ""
	I1002 20:12:19.617423   32280 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 20:12:19.617429   32280 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 20:12:19.617437   32280 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 20:12:19.617441   32280 command_runner.go:130] > # certificate on any modification event.
	I1002 20:12:19.617447   32280 command_runner.go:130] > # metrics_cert = ""
	I1002 20:12:19.617452   32280 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 20:12:19.617456   32280 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 20:12:19.617460   32280 command_runner.go:130] > # metrics_key = ""
	I1002 20:12:19.617465   32280 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 20:12:19.617471   32280 command_runner.go:130] > [crio.tracing]
	I1002 20:12:19.617476   32280 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 20:12:19.617482   32280 command_runner.go:130] > # enable_tracing = false
	I1002 20:12:19.617488   32280 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1002 20:12:19.617494   32280 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1002 20:12:19.617500   32280 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1002 20:12:19.617506   32280 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
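Enabling the two observability endpoints documented above is a small per-table change; a sketch using only the printed defaults:

	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"   # default listen address
	metrics_port = 9090          # default port

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"          # default gRPC collector address
	tracing_sampling_rate_per_million = 1000000  # always sample, per the comment above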
	I1002 20:12:19.617511   32280 command_runner.go:130] > # CRI-O NRI configuration.
	I1002 20:12:19.617514   32280 command_runner.go:130] > [crio.nri]
	I1002 20:12:19.617518   32280 command_runner.go:130] > # Globally enable or disable NRI.
	I1002 20:12:19.617524   32280 command_runner.go:130] > # enable_nri = true
	I1002 20:12:19.617527   32280 command_runner.go:130] > # NRI socket to listen on.
	I1002 20:12:19.617533   32280 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1002 20:12:19.617539   32280 command_runner.go:130] > # NRI plugin directory to use.
	I1002 20:12:19.617543   32280 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1002 20:12:19.617547   32280 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1002 20:12:19.617552   32280 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1002 20:12:19.617560   32280 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1002 20:12:19.617591   32280 command_runner.go:130] > # nri_disable_connections = false
	I1002 20:12:19.617598   32280 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1002 20:12:19.617604   32280 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1002 20:12:19.617612   32280 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1002 20:12:19.617623   32280 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1002 20:12:19.617630   32280 command_runner.go:130] > # NRI default validator configuration.
	I1002 20:12:19.617637   32280 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1002 20:12:19.617645   32280 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1002 20:12:19.617661   32280 command_runner.go:130] > # can be restricted/rejected:
	I1002 20:12:19.617671   32280 command_runner.go:130] > # - OCI hook injection
	I1002 20:12:19.617683   32280 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1002 20:12:19.617691   32280 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1002 20:12:19.617696   32280 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1002 20:12:19.617702   32280 command_runner.go:130] > # - adjustment of linux namespaces
	I1002 20:12:19.617708   32280 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1002 20:12:19.617715   32280 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1002 20:12:19.617720   32280 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1002 20:12:19.617722   32280 command_runner.go:130] > #
	I1002 20:12:19.617726   32280 command_runner.go:130] > # [crio.nri.default_validator]
	I1002 20:12:19.617733   32280 command_runner.go:130] > # nri_enable_default_validator = false
	I1002 20:12:19.617737   32280 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1002 20:12:19.617743   32280 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1002 20:12:19.617750   32280 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1002 20:12:19.617755   32280 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1002 20:12:19.617759   32280 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1002 20:12:19.617764   32280 command_runner.go:130] > # nri_validator_required_plugins = [
	I1002 20:12:19.617767   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617771   32280 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
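Turning on the built-in validator described above could be sketched as follows; which restrictions to reject is a policy choice, so the single enabled rejection here is only an example, not taken from this log.

	[crio.nri.default_validator]
	nri_enable_default_validator = true
	# example restriction: reject containers if a plugin injected an OCI hook
	nri_validator_reject_oci_hook_adjustment = true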
	I1002 20:12:19.617779   32280 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 20:12:19.617782   32280 command_runner.go:130] > [crio.stats]
	I1002 20:12:19.617787   32280 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 20:12:19.617796   32280 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 20:12:19.617800   32280 command_runner.go:130] > # stats_collection_period = 0
	I1002 20:12:19.617807   32280 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1002 20:12:19.617812   32280 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1002 20:12:19.617819   32280 command_runner.go:130] > # collection_period = 0
	I1002 20:12:19.617847   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597735388Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1002 20:12:19.617857   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597762161Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1002 20:12:19.617879   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597788561Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1002 20:12:19.617891   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597814431Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1002 20:12:19.617901   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597905829Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:19.617910   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.59812179Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1002 20:12:19.617937   32280 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 20:12:19.618034   32280 cni.go:84] Creating CNI manager for ""
	I1002 20:12:19.618045   32280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:12:19.618055   32280 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:12:19.618074   32280 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753218 NodeName:functional-753218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:12:19.618185   32280 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753218"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:12:19.618237   32280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:12:19.625483   32280 command_runner.go:130] > kubeadm
	I1002 20:12:19.625499   32280 command_runner.go:130] > kubectl
	I1002 20:12:19.625503   32280 command_runner.go:130] > kubelet
	I1002 20:12:19.626080   32280 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:12:19.626131   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:12:19.633273   32280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:12:19.644695   32280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:12:19.656113   32280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1002 20:12:19.667414   32280 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:12:19.670740   32280 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1002 20:12:19.670794   32280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:12:19.752159   32280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:12:19.764280   32280 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218 for IP: 192.168.49.2
	I1002 20:12:19.764303   32280 certs.go:195] generating shared ca certs ...
	I1002 20:12:19.764324   32280 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:19.764461   32280 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:12:19.764507   32280 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:12:19.764516   32280 certs.go:257] generating profile certs ...
	I1002 20:12:19.764596   32280 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key
	I1002 20:12:19.764641   32280 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key.2c64f804
	I1002 20:12:19.764700   32280 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key
	I1002 20:12:19.764711   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:12:19.764723   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:12:19.764735   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:12:19.764749   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:12:19.764761   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:12:19.764773   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:12:19.764785   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:12:19.764797   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:12:19.764840   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:12:19.764868   32280 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:12:19.764878   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:12:19.764907   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:12:19.764932   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:12:19.764953   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:12:19.764991   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:12:19.765016   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:19.765029   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.765042   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:12:19.765474   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:12:19.782548   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:12:19.799734   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:12:19.816390   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:12:19.832589   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:12:19.848700   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:12:19.864849   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:12:19.880775   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:12:19.896846   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:12:19.913614   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:12:19.929578   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:12:19.945677   32280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:12:19.957745   32280 ssh_runner.go:195] Run: openssl version
	I1002 20:12:19.963258   32280 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1002 20:12:19.963501   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:12:19.971695   32280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.975234   32280 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.975257   32280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.975294   32280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:12:20.009021   32280 command_runner.go:130] > 51391683
	I1002 20:12:20.009100   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:12:20.016966   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:12:20.025422   32280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.029194   32280 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.029238   32280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.029282   32280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.064218   32280 command_runner.go:130] > 3ec20f2e
	I1002 20:12:20.064321   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:12:20.072502   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:12:20.080739   32280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.084507   32280 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.084542   32280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.084576   32280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.118973   32280 command_runner.go:130] > b5213941
	I1002 20:12:20.119045   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:12:20.127219   32280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:12:20.130733   32280 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:12:20.130756   32280 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1002 20:12:20.130765   32280 command_runner.go:130] > Device: 8,1	Inode: 579408      Links: 1
	I1002 20:12:20.130774   32280 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:12:20.130783   32280 command_runner.go:130] > Access: 2025-10-02 20:08:10.644972655 +0000
	I1002 20:12:20.130793   32280 command_runner.go:130] > Modify: 2025-10-02 20:04:06.596879146 +0000
	I1002 20:12:20.130799   32280 command_runner.go:130] > Change: 2025-10-02 20:04:06.596879146 +0000
	I1002 20:12:20.130806   32280 command_runner.go:130] >  Birth: 2025-10-02 20:04:06.596879146 +0000
	I1002 20:12:20.130872   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:12:20.164340   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.164601   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:12:20.199434   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.199512   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:12:20.233489   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.233589   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:12:20.266980   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.267235   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:12:20.300792   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.301105   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 20:12:20.334621   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.334895   32280 kubeadm.go:400] StartCluster: {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:12:20.334978   32280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:12:20.335040   32280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:12:20.362233   32280 cri.go:89] found id: ""
	I1002 20:12:20.362287   32280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:12:20.370000   32280 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 20:12:20.370022   32280 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 20:12:20.370028   32280 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 20:12:20.370045   32280 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:12:20.370050   32280 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:12:20.370092   32280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:12:20.377231   32280 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:12:20.377306   32280 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-753218" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.377343   32280 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-9327/kubeconfig needs updating (will repair): [kubeconfig missing "functional-753218" cluster setting kubeconfig missing "functional-753218" context setting]
	I1002 20:12:20.377618   32280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:20.379016   32280 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.379143   32280 kapi.go:59] client config for functional-753218: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:12:20.379525   32280 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:12:20.379543   32280 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:12:20.379548   32280 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:12:20.379552   32280 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:12:20.379556   32280 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:12:20.379580   32280 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:12:20.379896   32280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:12:20.387047   32280 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:12:20.387086   32280 kubeadm.go:601] duration metric: took 17.030465ms to restartPrimaryControlPlane
	I1002 20:12:20.387097   32280 kubeadm.go:402] duration metric: took 52.210982ms to StartCluster
	I1002 20:12:20.387113   32280 settings.go:142] acquiring lock: {Name:mk7417292ad351472478c5266970b58ebdd4130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:20.387221   32280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.387762   32280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:20.387978   32280 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:12:20.388069   32280 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:12:20.388123   32280 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:12:20.388170   32280 addons.go:69] Setting storage-provisioner=true in profile "functional-753218"
	I1002 20:12:20.388189   32280 addons.go:238] Setting addon storage-provisioner=true in "functional-753218"
	I1002 20:12:20.388224   32280 host.go:66] Checking if "functional-753218" exists ...
	I1002 20:12:20.388188   32280 addons.go:69] Setting default-storageclass=true in profile "functional-753218"
	I1002 20:12:20.388303   32280 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-753218"
	I1002 20:12:20.388534   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:20.388593   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:20.390858   32280 out.go:179] * Verifying Kubernetes components...
	I1002 20:12:20.392041   32280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:12:20.408831   32280 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.409013   32280 kapi.go:59] client config for functional-753218: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
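
The rest.Config dumped above is what minikube assembles from the kubeconfig it just wrote: the apiserver at https://192.168.49.2:8441 plus the profile's client cert/key and the cluster CA. With client-go the equivalent construction is only a few lines (kubeconfig path taken from the log; assumes the standard client-go modules are available):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // BuildConfigFromFlags reads the server address and TLS material
        // from the kubeconfig, producing the same rest.Config logged above.
        cfg, err := clientcmd.BuildConfigFromFlags("",
            "/home/jenkins/minikube-integration/21683-9327/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := client.CoreV1().Nodes().Get(context.TODO(),
            "functional-753218", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println(node.Name)
    }
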
	I1002 20:12:20.409334   32280 addons.go:238] Setting addon default-storageclass=true in "functional-753218"
	I1002 20:12:20.409372   32280 host.go:66] Checking if "functional-753218" exists ...
	I1002 20:12:20.409857   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:20.409921   32280 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:12:20.411389   32280 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:20.411408   32280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:12:20.411451   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:20.434249   32280 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:20.434269   32280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:12:20.434323   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:20.437366   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:20.453124   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
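
Before it can scp the addon manifests, minikube resolves which host port Docker published for the container's 22/tcp; the two `docker container inspect -f` runs above return 32778. A standalone version of that lookup, reusing the exact Go template from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostSSHPort returns the host port Docker mapped to the container's
    // 22/tcp, using the same Go template as the inspect calls in the log.
    func hostSSHPort(container string) (string, error) {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostSSHPort("functional-753218")
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh port:", port) // e.g. 32778 in this run
    }
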
	I1002 20:12:20.491163   32280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:12:20.504681   32280 node_ready.go:35] waiting up to 6m0s for node "functional-753218" to be "Ready" ...
	I1002 20:12:20.504813   32280 type.go:168] "Request Body" body=""
	I1002 20:12:20.504901   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:20.505187   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
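
node_ready.go then polls GET /api/v1/nodes/functional-753218 roughly twice a second for up to 6m, waiting for the node's Ready condition; every response below is empty because the apiserver is refusing connections. A sketch of such a wait loop with client-go, assuming a recent apimachinery for wait.PollUntilContextTimeout (clientset construction as in the earlier sketch):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(),
            500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    // Connection refused and similar transient errors are
                    // swallowed so the poll keeps retrying, as in the log.
                    return false, nil
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            "/home/jenkins/minikube-integration/21683-9327/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitNodeReady(client, "functional-753218", 6*time.Minute))
    }
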
	I1002 20:12:20.544925   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:20.560749   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:20.598254   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:20.598305   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.598334   32280 retry.go:31] will retry after 360.790251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
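
Every failed apply is handed to retry.go, which reschedules it with a growing, jittered delay (360ms here, then roughly doubling into the multi-second range later in the log). A minimal sketch of that retry-with-backoff pattern; the base delay, cap, and jitter below are illustrative, not minikube's actual tuning:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff runs fn up to attempts times, sleeping an
    // exponentially growing, jittered delay between failures, capped
    // at maxDelay.
    func retryWithBackoff(attempts int, base, maxDelay time.Duration, fn func() error) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", jittered, err)
            time.Sleep(jittered)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
        return err
    }

    func main() {
        calls := 0
        err := retryWithBackoff(5, 200*time.Millisecond, 5*time.Second, func() error {
            if calls++; calls < 3 {
                return fmt.Errorf("connection refused")
            }
            return nil
        })
        fmt.Println("final:", err)
    }
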
	I1002 20:12:20.611750   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:20.611829   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.611854   32280 retry.go:31] will retry after 210.270105ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.822270   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:20.872283   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:20.874485   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.874514   32280 retry.go:31] will retry after 244.966298ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
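
Note the fallback: the first attempt is a plain `kubectl apply`, and once validation fails all subsequent retries switch to `kubectl apply --force`. Running one such command from Go, with KUBECONFIG injected into the environment the way the log shows (binary and manifest paths copied from the log; the in-node binary location is minikube's convention):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func applyManifest(kubectl, kubeconfig, manifest string, force bool) error {
        args := []string{"apply"}
        if force {
            args = append(args, "--force")
        }
        args = append(args, "-f", manifest)
        cmd := exec.Command(kubectl, args...)
        // The log runs this under sudo with KUBECONFIG pointing at the
        // node-local kubeconfig rather than the host's.
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        err := applyManifest(
            "/var/lib/minikube/binaries/v1.34.1/kubectl",
            "/var/lib/minikube/kubeconfig",
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            true, // the retries in the log use --force
        )
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }
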
	I1002 20:12:20.959846   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:21.005341   32280 type.go:168] "Request Body" body=""
	I1002 20:12:21.005421   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:21.005781   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:21.012418   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.012451   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.012466   32280 retry.go:31] will retry after 409.292121ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.119728   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:21.168429   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.170739   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.170771   32280 retry.go:31] will retry after 294.217693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.422106   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:21.465688   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:21.470239   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.472502   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.472537   32280 retry.go:31] will retry after 332.995728ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.505685   32280 type.go:168] "Request Body" body=""
	I1002 20:12:21.505778   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:21.506123   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:21.516911   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.516971   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.516996   32280 retry.go:31] will retry after 954.810325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.806393   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:21.857573   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.857614   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.857637   32280 retry.go:31] will retry after 1.033500231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.004877   32280 type.go:168] "Request Body" body=""
	I1002 20:12:22.004976   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:22.005310   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:22.472906   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:22.505435   32280 type.go:168] "Request Body" body=""
	I1002 20:12:22.505517   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:22.505893   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:22.505957   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:22.524411   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:22.524454   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.524474   32280 retry.go:31] will retry after 931.915639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.892005   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:22.942851   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:22.942928   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.942955   32280 retry.go:31] will retry after 1.834952264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:23.004927   32280 type.go:168] "Request Body" body=""
	I1002 20:12:23.005007   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:23.005354   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:23.456821   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:23.505094   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:23.505147   32280 type.go:168] "Request Body" body=""
	I1002 20:12:23.505217   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:23.505484   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:23.507597   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:23.507626   32280 retry.go:31] will retry after 2.313716894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:24.005157   32280 type.go:168] "Request Body" body=""
	I1002 20:12:24.005267   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:24.005594   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:24.505508   32280 type.go:168] "Request Body" body=""
	I1002 20:12:24.505632   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:24.506012   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:24.506092   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:24.778419   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:24.830315   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:24.830361   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:24.830382   32280 retry.go:31] will retry after 2.530323246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:25.005736   32280 type.go:168] "Request Body" body=""
	I1002 20:12:25.005808   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:25.006117   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:25.504853   32280 type.go:168] "Request Body" body=""
	I1002 20:12:25.504920   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:25.505273   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:25.821714   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:25.872812   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:25.872859   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:25.872881   32280 retry.go:31] will retry after 1.957365536s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:26.005078   32280 type.go:168] "Request Body" body=""
	I1002 20:12:26.005153   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:26.005503   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:26.505250   32280 type.go:168] "Request Body" body=""
	I1002 20:12:26.505323   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:26.505711   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:27.005530   32280 type.go:168] "Request Body" body=""
	I1002 20:12:27.005599   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:27.005959   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:27.006023   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:27.361473   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:27.411520   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:27.413776   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:27.413807   32280 retry.go:31] will retry after 3.768585845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:27.504922   32280 type.go:168] "Request Body" body=""
	I1002 20:12:27.505019   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:27.505351   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:27.830904   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:27.880071   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:27.882324   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:27.882350   32280 retry.go:31] will retry after 2.676983733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:28.005719   32280 type.go:168] "Request Body" body=""
	I1002 20:12:28.005789   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:28.006101   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:28.504826   32280 type.go:168] "Request Body" body=""
	I1002 20:12:28.504909   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:28.505226   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:29.004968   32280 type.go:168] "Request Body" body=""
	I1002 20:12:29.005052   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:29.005361   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:29.505178   32280 type.go:168] "Request Body" body=""
	I1002 20:12:29.505270   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:29.505576   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:29.505628   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:30.005335   32280 type.go:168] "Request Body" body=""
	I1002 20:12:30.005400   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:30.005747   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:30.505557   32280 type.go:168] "Request Body" body=""
	I1002 20:12:30.505643   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:30.505971   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:30.560186   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:30.610807   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:30.610870   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:30.610892   32280 retry.go:31] will retry after 7.973230912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:31.005274   32280 type.go:168] "Request Body" body=""
	I1002 20:12:31.005351   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:31.005789   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:31.182990   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:31.231953   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:31.234462   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:31.234491   32280 retry.go:31] will retry after 5.687657455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:31.504842   32280 type.go:168] "Request Body" body=""
	I1002 20:12:31.504956   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:31.505254   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:32.005885   32280 type.go:168] "Request Body" body=""
	I1002 20:12:32.005962   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:32.006262   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:32.006314   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:32.504840   32280 type.go:168] "Request Body" body=""
	I1002 20:12:32.504910   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:32.505216   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:33.005827   32280 type.go:168] "Request Body" body=""
	I1002 20:12:33.005903   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:33.006210   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:33.505861   32280 type.go:168] "Request Body" body=""
	I1002 20:12:33.505928   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:33.506234   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:34.005834   32280 type.go:168] "Request Body" body=""
	I1002 20:12:34.005939   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:34.006292   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:34.006347   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:34.505067   32280 type.go:168] "Request Body" body=""
	I1002 20:12:34.505178   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:34.505476   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:35.005027   32280 type.go:168] "Request Body" body=""
	I1002 20:12:35.005102   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:35.005423   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:35.504956   32280 type.go:168] "Request Body" body=""
	I1002 20:12:35.505018   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:35.505338   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:36.004897   32280 type.go:168] "Request Body" body=""
	I1002 20:12:36.005010   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:36.005325   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:36.504908   32280 type.go:168] "Request Body" body=""
	I1002 20:12:36.504975   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:36.505273   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:36.505325   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:36.922844   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:36.972691   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:36.975093   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:36.975120   32280 retry.go:31] will retry after 6.057609391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:37.005334   32280 type.go:168] "Request Body" body=""
	I1002 20:12:37.005422   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:37.005758   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:37.505360   32280 type.go:168] "Request Body" body=""
	I1002 20:12:37.505473   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:37.505826   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:38.005595   32280 type.go:168] "Request Body" body=""
	I1002 20:12:38.005685   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:38.005995   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:38.505731   32280 type.go:168] "Request Body" body=""
	I1002 20:12:38.505833   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:38.506204   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:38.506258   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:38.584343   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:38.634498   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:38.634541   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:38.634559   32280 retry.go:31] will retry after 11.473349324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:39.004966   32280 type.go:168] "Request Body" body=""
	I1002 20:12:39.005047   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:39.005329   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:39.505287   32280 type.go:168] "Request Body" body=""
	I1002 20:12:39.505349   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:39.505690   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:40.005217   32280 type.go:168] "Request Body" body=""
	I1002 20:12:40.005283   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:40.005689   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:40.505522   32280 type.go:168] "Request Body" body=""
	I1002 20:12:40.505586   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:40.505931   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:41.005519   32280 type.go:168] "Request Body" body=""
	I1002 20:12:41.005620   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:41.005984   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:41.006049   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:41.505595   32280 type.go:168] "Request Body" body=""
	I1002 20:12:41.505678   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:41.506021   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:42.005588   32280 type.go:168] "Request Body" body=""
	I1002 20:12:42.005666   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:42.005990   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:42.505580   32280 type.go:168] "Request Body" body=""
	I1002 20:12:42.505660   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:42.506010   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:43.005624   32280 type.go:168] "Request Body" body=""
	I1002 20:12:43.005704   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:43.006025   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:43.006077   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:43.033216   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:43.084626   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:43.084680   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:43.084700   32280 retry.go:31] will retry after 13.696949746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:43.504971   32280 type.go:168] "Request Body" body=""
	I1002 20:12:43.505052   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:43.505379   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:44.004912   32280 type.go:168] "Request Body" body=""
	I1002 20:12:44.004988   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:44.005285   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:44.504985   32280 type.go:168] "Request Body" body=""
	I1002 20:12:44.505082   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:44.505402   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:45.004953   32280 type.go:168] "Request Body" body=""
	I1002 20:12:45.005026   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:45.005321   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:45.504904   32280 type.go:168] "Request Body" body=""
	I1002 20:12:45.504997   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:45.505300   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:45.505354   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
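	[Editor's note] The GET loop against /api/v1/nodes/functional-753218 above is minikube waiting for the node's Ready condition, polling roughly every 500ms. A minimal client-go sketch of the same check, assuming a kubeconfig at the default path; "waitNodeReady" is an illustrative name, not minikube's:

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // waitNodeReady polls the named node until its Ready condition is
	    // True, logging and retrying on transient errors such as the
	    // "connection refused" seen while the apiserver restarts.
	    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	        for {
	            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	            if err != nil {
	                fmt.Printf("error getting node %q (will retry): %v\n", name, err)
	            } else {
	                for _, c := range node.Status.Conditions {
	                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
	                        return nil
	                    }
	                }
	            }
	            select {
	            case <-ctx.Done():
	                return ctx.Err()
	            case <-time.After(500 * time.Millisecond):
	            }
	        }
	    }

	    func main() {
	        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(config)
	        if err != nil {
	            panic(err)
	        }
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	        defer cancel()
	        fmt.Println(waitNodeReady(ctx, cs, "functional-753218"))
	    }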
	I1002 20:12:46.004960   32280 type.go:168] "Request Body" body=""
	I1002 20:12:46.005023   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:46.005350   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:46.504882   32280 type.go:168] "Request Body" body=""
	I1002 20:12:46.505005   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:46.505316   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:47.004909   32280 type.go:168] "Request Body" body=""
	I1002 20:12:47.004973   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:47.005265   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:47.505882   32280 type.go:168] "Request Body" body=""
	I1002 20:12:47.506000   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:47.506320   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:47.506400   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:48.004928   32280 type.go:168] "Request Body" body=""
	I1002 20:12:48.005004   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:48.005305   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:48.504865   32280 type.go:168] "Request Body" body=""
	I1002 20:12:48.504959   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:48.505270   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:49.004954   32280 type.go:168] "Request Body" body=""
	I1002 20:12:49.005020   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:49.005323   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:49.505045   32280 type.go:168] "Request Body" body=""
	I1002 20:12:49.505108   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:49.505418   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:50.004957   32280 type.go:168] "Request Body" body=""
	I1002 20:12:50.005023   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:50.005336   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:50.005399   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:50.108603   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:50.158622   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:50.158675   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:50.158705   32280 retry.go:31] will retry after 7.866512619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:50.505487   32280 type.go:168] "Request Body" body=""
	I1002 20:12:50.505555   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:50.505903   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:51.005559   32280 type.go:168] "Request Body" body=""
	I1002 20:12:51.005635   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:51.005990   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:51.505707   32280 type.go:168] "Request Body" body=""
	I1002 20:12:51.505791   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:51.506153   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:52.005777   32280 type.go:168] "Request Body" body=""
	I1002 20:12:52.005901   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:52.006225   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:52.006281   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:52.504874   32280 type.go:168] "Request Body" body=""
	I1002 20:12:52.504935   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:52.505268   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:53.005873   32280 type.go:168] "Request Body" body=""
	I1002 20:12:53.005951   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:53.006260   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:53.504881   32280 type.go:168] "Request Body" body=""
	I1002 20:12:53.505006   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:53.505318   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:54.004965   32280 type.go:168] "Request Body" body=""
	I1002 20:12:54.005040   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:54.005355   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:54.505336   32280 type.go:168] "Request Body" body=""
	I1002 20:12:54.505429   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:54.505803   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:54.505860   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:55.005500   32280 type.go:168] "Request Body" body=""
	I1002 20:12:55.005582   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:55.005971   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:55.505630   32280 type.go:168] "Request Body" body=""
	I1002 20:12:55.505727   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:55.506074   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:56.005749   32280 type.go:168] "Request Body" body=""
	I1002 20:12:56.005828   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:56.006175   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:56.505817   32280 type.go:168] "Request Body" body=""
	I1002 20:12:56.505912   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:56.506244   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:56.506305   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:56.782639   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:56.831722   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:56.833971   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:56.834005   32280 retry.go:31] will retry after 8.803585786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:57.005357   32280 type.go:168] "Request Body" body=""
	I1002 20:12:57.005440   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:57.005756   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:57.505340   32280 type.go:168] "Request Body" body=""
	I1002 20:12:57.505420   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:57.505751   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:58.005333   32280 type.go:168] "Request Body" body=""
	I1002 20:12:58.005402   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:58.005752   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:58.025966   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:58.074036   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:58.076335   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:58.076367   32280 retry.go:31] will retry after 21.837732561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:58.504884   32280 type.go:168] "Request Body" body=""
	I1002 20:12:58.504952   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:58.505269   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:59.005019   32280 type.go:168] "Request Body" body=""
	I1002 20:12:59.005088   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:59.005416   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:59.005476   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:59.505294   32280 type.go:168] "Request Body" body=""
	I1002 20:12:59.505371   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:59.505719   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:00.005587   32280 type.go:168] "Request Body" body=""
	I1002 20:13:00.005681   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:00.006070   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:00.505895   32280 type.go:168] "Request Body" body=""
	I1002 20:13:00.505970   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:00.506282   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:01.005032   32280 type.go:168] "Request Body" body=""
	I1002 20:13:01.005101   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:01.005454   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:01.005507   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:01.505230   32280 type.go:168] "Request Body" body=""
	I1002 20:13:01.505332   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:01.505713   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:02.005565   32280 type.go:168] "Request Body" body=""
	I1002 20:13:02.005638   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:02.005989   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:02.505747   32280 type.go:168] "Request Body" body=""
	I1002 20:13:02.505834   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:02.506161   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:03.004921   32280 type.go:168] "Request Body" body=""
	I1002 20:13:03.004999   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:03.005353   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:03.505030   32280 type.go:168] "Request Body" body=""
	I1002 20:13:03.505163   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:03.505496   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:03.505553   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:04.005013   32280 type.go:168] "Request Body" body=""
	I1002 20:13:04.005102   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:04.005412   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:04.505235   32280 type.go:168] "Request Body" body=""
	I1002 20:13:04.505310   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:04.505603   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:05.005373   32280 type.go:168] "Request Body" body=""
	I1002 20:13:05.005436   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:05.005779   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:05.505626   32280 type.go:168] "Request Body" body=""
	I1002 20:13:05.505713   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:05.506017   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:05.506071   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:05.638454   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:13:05.690182   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:05.690237   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:13:05.690256   32280 retry.go:31] will retry after 17.824989731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
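	[Editor's note] Every apply attempt in this stretch fails identically: kubectl cannot download the OpenAPI schema because nothing is accepting connections on port 8441. A minimal sketch that probes the port directly, under the assumption that plain TCP reachability is an adequate proxy for apiserver liveness:

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    // portOpen reports whether a TCP connection to addr succeeds within
	    // the timeout. The "connect: connection refused" errors throughout
	    // this log would surface here as a non-nil dial error.
	    func portOpen(addr string, timeout time.Duration) bool {
	        conn, err := net.DialTimeout("tcp", addr, timeout)
	        if err != nil {
	            return false
	        }
	        conn.Close()
	        return true
	    }

	    func main() {
	        for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
	            fmt.Printf("%s reachable: %v\n", addr, portOpen(addr, 2*time.Second))
	        }
	    }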
	I1002 20:13:06.005701   32280 type.go:168] "Request Body" body=""
	I1002 20:13:06.005799   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:06.006119   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:06.504842   32280 type.go:168] "Request Body" body=""
	I1002 20:13:06.504914   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:06.505251   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:07.005004   32280 type.go:168] "Request Body" body=""
	I1002 20:13:07.005108   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:07.005436   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:07.505210   32280 type.go:168] "Request Body" body=""
	I1002 20:13:07.505283   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:07.505609   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:08.005363   32280 type.go:168] "Request Body" body=""
	I1002 20:13:08.005446   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:08.005783   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:08.005845   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:08.505633   32280 type.go:168] "Request Body" body=""
	I1002 20:13:08.505725   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:08.506087   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:09.004810   32280 type.go:168] "Request Body" body=""
	I1002 20:13:09.004939   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:09.005246   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:09.505036   32280 type.go:168] "Request Body" body=""
	I1002 20:13:09.505109   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:09.505465   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:10.005227   32280 type.go:168] "Request Body" body=""
	I1002 20:13:10.005294   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:10.005624   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:10.505218   32280 type.go:168] "Request Body" body=""
	I1002 20:13:10.505284   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:10.505609   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:10.505692   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:11.005490   32280 type.go:168] "Request Body" body=""
	I1002 20:13:11.005558   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:11.005879   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:11.505739   32280 type.go:168] "Request Body" body=""
	I1002 20:13:11.505817   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:11.506182   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:12.004937   32280 type.go:168] "Request Body" body=""
	I1002 20:13:12.005026   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:12.005341   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:12.505102   32280 type.go:168] "Request Body" body=""
	I1002 20:13:12.505168   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:12.505509   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:13.005242   32280 type.go:168] "Request Body" body=""
	I1002 20:13:13.005316   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:13.005692   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:13.005741   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:13.505519   32280 type.go:168] "Request Body" body=""
	I1002 20:13:13.505584   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:13.505958   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:14.005767   32280 type.go:168] "Request Body" body=""
	I1002 20:13:14.005841   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:14.006164   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:14.505003   32280 type.go:168] "Request Body" body=""
	I1002 20:13:14.505069   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:14.505397   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:15.005101   32280 type.go:168] "Request Body" body=""
	I1002 20:13:15.005189   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:15.005569   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:15.505328   32280 type.go:168] "Request Body" body=""
	I1002 20:13:15.505404   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:15.505799   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:15.505864   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:16.005581   32280 type.go:168] "Request Body" body=""
	I1002 20:13:16.005659   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:16.006015   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:16.505815   32280 type.go:168] "Request Body" body=""
	I1002 20:13:16.505909   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:16.506240   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:17.004924   32280 type.go:168] "Request Body" body=""
	I1002 20:13:17.004989   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:17.005317   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:17.505042   32280 type.go:168] "Request Body" body=""
	I1002 20:13:17.505108   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:17.505466   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:18.005185   32280 type.go:168] "Request Body" body=""
	I1002 20:13:18.005248   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:18.005594   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:18.005675   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:18.505365   32280 type.go:168] "Request Body" body=""
	I1002 20:13:18.505431   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:18.505829   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:19.005617   32280 type.go:168] "Request Body" body=""
	I1002 20:13:19.005703   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:19.006054   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:19.505860   32280 type.go:168] "Request Body" body=""
	I1002 20:13:19.505925   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:19.506274   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
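	[Editor's note] The Accept header logged on every request ("application/vnd.kubernetes.protobuf,application/json") is client-go content negotiation: prefer protobuf, fall back to JSON. A short sketch of configuring the same preference on a rest.Config; the kubeconfig path is the client-go default and an assumption here:

	    package main

	    import (
	        "fmt"

	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        // Same negotiation as the Accept header in the log: ask for
	        // protobuf first, accept JSON as a fallback.
	        config.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	        config.ContentType = "application/vnd.kubernetes.protobuf"
	        cs, err := kubernetes.NewForConfig(config)
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("client ready: %T\n", cs)
	    }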
	I1002 20:13:19.914795   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:13:19.964946   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:19.964982   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:13:19.964998   32280 retry.go:31] will retry after 37.877741779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
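	[Editor's note] The ssh_runner lines show how the addon manifests are applied: minikube shells out to the bundled kubectl with an explicit KUBECONFIG. A local (non-SSH) sketch of the equivalent invocation, using the same flags as the log; paths are taken verbatim from the log and assume a minikube-provisioned node:

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	    )

	    func main() {
	        // Equivalent of the logged command, minus sudo and the SSH hop:
	        // kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	        cmd := exec.Command("kubectl", "apply", "--force", "-f",
	            "/etc/kubernetes/addons/storageclass.yaml")
	        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	        out, err := cmd.CombinedOutput()
	        fmt.Printf("output:\n%s\nerr: %v\n", out, err)
	    }

	With the apiserver down, this exits with status 1 and the same validation error as above; passing --validate=false would only skip schema validation, not fix the refused connection.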
	I1002 20:13:20.005163   32280 type.go:168] "Request Body" body=""
	I1002 20:13:20.005260   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:20.005579   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:20.505603   32280 type.go:168] "Request Body" body=""
	I1002 20:13:20.505696   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:20.506040   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:20.506105   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:21.005687   32280 type.go:168] "Request Body" body=""
	I1002 20:13:21.005752   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:21.006074   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:21.505754   32280 type.go:168] "Request Body" body=""
	I1002 20:13:21.505828   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:21.506211   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:22.005841   32280 type.go:168] "Request Body" body=""
	I1002 20:13:22.005906   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:22.006231   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:22.505901   32280 type.go:168] "Request Body" body=""
	I1002 20:13:22.506010   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:22.506365   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:22.506463   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:23.004934   32280 type.go:168] "Request Body" body=""
	I1002 20:13:23.005035   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:23.005390   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:23.504963   32280 type.go:168] "Request Body" body=""
	I1002 20:13:23.505048   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:23.505365   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:23.515608   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:13:23.566822   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:23.566879   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:13:23.566903   32280 retry.go:31] will retry after 23.13190401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical ~500ms GET /api/v1/nodes/functional-753218 polls from 20:13:24 through 20:13:46.5 omitted; every response was "dial tcp 192.168.49.2:8441: connect: connection refused", with a node_ready.go "will retry" warning roughly every two seconds ...]
	I1002 20:13:46.699644   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:13:46.747344   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:46.749844   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:46.749973   32280 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
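Both the kubectl validation failure (localhost:8441) and the node polls (192.168.49.2:8441) reduce to the same TCP-level symptom: nothing is listening on the apiserver port. A raw dial reproduces it outside kubectl (diagnostic sketch; endpoint taken from the log above):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Apiserver endpoint from the log above.
        conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
        if err != nil {
            // With the apiserver down this prints:
            // dial tcp 192.168.49.2:8441: connect: connection refused
            fmt.Println(err)
            return
        }
        defer conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }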
	[... identical polls from 20:13:47 through 20:13:57.5 omitted; still connection refused, with node_ready.go warnings about every 2.5 seconds ...]
	I1002 20:13:57.843521   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:13:57.893953   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:57.894023   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:57.894118   32280 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
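The ssh_runner lines execute each addon apply as a remote shell command, and "Process exited with status 1" is the captured exit code. A sketch of running the same command shape and separating stdout from stderr (illustrative; not minikube's command_runner):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        // Same command shape as the log; the env assignment needs a shell.
        cmd := exec.Command("sh", "-c",
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
                "/var/lib/minikube/binaries/v1.34.1/kubectl apply --force "+
                "-f /etc/kubernetes/addons/storageclass.yaml")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            // err is *exec.ExitError when kubectl exits non-zero,
            // e.g. "exit status 1" while the apiserver is down.
            fmt.Printf("apply failed: %v\nstdout:\n%s\nstderr:\n%s\n",
                err, stdout.String(), stderr.String())
        }
    }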
	I1002 20:13:57.896474   32280 out.go:179] * Enabled addons: 
	I1002 20:13:57.898063   32280 addons.go:514] duration metric: took 1m37.510002204s for enable addons: enabled=[]
	I1002 20:13:58.005248   32280 type.go:168] "Request Body" body=""
	I1002 20:13:58.005351   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:58.005671   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:58.505487   32280 type.go:168] "Request Body" body=""
	I1002 20:13:58.505565   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:58.505958   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:58.506014   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:59.005771   32280 type.go:168] "Request Body" body=""
	I1002 20:13:59.005876   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:59.006192   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:59.504962   32280 type.go:168] "Request Body" body=""
	I1002 20:13:59.505047   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:59.505359   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:00.005006   32280 type.go:168] "Request Body" body=""
	I1002 20:14:00.005069   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:00.005392   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:00.505111   32280 type.go:168] "Request Body" body=""
	I1002 20:14:00.505199   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:00.505503   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:01.005227   32280 type.go:168] "Request Body" body=""
	I1002 20:14:01.005326   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:01.005717   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:01.005789   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:01.505598   32280 type.go:168] "Request Body" body=""
	I1002 20:14:01.505687   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:01.506000   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:02.005861   32280 type.go:168] "Request Body" body=""
	I1002 20:14:02.005935   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:02.006338   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:02.504980   32280 type.go:168] "Request Body" body=""
	I1002 20:14:02.505043   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:02.505444   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:03.005199   32280 type.go:168] "Request Body" body=""
	I1002 20:14:03.005295   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:03.005617   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:03.505417   32280 type.go:168] "Request Body" body=""
	I1002 20:14:03.505500   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:03.505831   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:03.505910   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:04.005688   32280 type.go:168] "Request Body" body=""
	I1002 20:14:04.005768   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:04.006079   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:04.505822   32280 type.go:168] "Request Body" body=""
	I1002 20:14:04.505929   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:04.506212   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:05.004939   32280 type.go:168] "Request Body" body=""
	I1002 20:14:05.005032   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:05.005365   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:05.505085   32280 type.go:168] "Request Body" body=""
	I1002 20:14:05.505167   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:05.505489   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:06.005229   32280 type.go:168] "Request Body" body=""
	I1002 20:14:06.005293   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:06.005679   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:06.005733   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:06.505561   32280 type.go:168] "Request Body" body=""
	I1002 20:14:06.505662   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:06.505997   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:07.005758   32280 type.go:168] "Request Body" body=""
	I1002 20:14:07.005865   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:07.006186   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:07.504924   32280 type.go:168] "Request Body" body=""
	I1002 20:14:07.504999   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:07.505319   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:08.005020   32280 type.go:168] "Request Body" body=""
	I1002 20:14:08.005110   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:08.005412   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:08.505144   32280 type.go:168] "Request Body" body=""
	I1002 20:14:08.505221   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:08.505546   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:08.505597   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:09.005324   32280 type.go:168] "Request Body" body=""
	I1002 20:14:09.005388   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:09.005759   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:09.505663   32280 type.go:168] "Request Body" body=""
	I1002 20:14:09.505738   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:09.506059   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:10.004831   32280 type.go:168] "Request Body" body=""
	I1002 20:14:10.004913   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:10.005285   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:10.504951   32280 type.go:168] "Request Body" body=""
	I1002 20:14:10.505047   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:10.505396   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:11.005158   32280 type.go:168] "Request Body" body=""
	I1002 20:14:11.005275   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:11.005733   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:11.005797   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:11.505549   32280 type.go:168] "Request Body" body=""
	I1002 20:14:11.505697   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:11.506073   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:13.505493   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET request and empty response repeat every ~500ms, with the same "connection refused" warning logged roughly every 2.5s, from 20:14:12 through 20:15:13; duplicate entries omitted ...]
	W1002 20:15:13.506290   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:14.005887   32280 type.go:168] "Request Body" body=""
	I1002 20:15:14.005962   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:14.006270   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:14.505064   32280 type.go:168] "Request Body" body=""
	I1002 20:15:14.505129   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:14.505450   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:15.004995   32280 type.go:168] "Request Body" body=""
	I1002 20:15:15.005063   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:15.005377   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:15.504923   32280 type.go:168] "Request Body" body=""
	I1002 20:15:15.504986   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:15.505294   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:16.004941   32280 type.go:168] "Request Body" body=""
	I1002 20:15:16.005008   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:16.005312   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:16.005376   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:16.504960   32280 type.go:168] "Request Body" body=""
	I1002 20:15:16.505033   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:16.505386   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:17.005033   32280 type.go:168] "Request Body" body=""
	I1002 20:15:17.005095   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:17.005406   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:17.504971   32280 type.go:168] "Request Body" body=""
	I1002 20:15:17.505037   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:17.505351   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:18.005815   32280 type.go:168] "Request Body" body=""
	I1002 20:15:18.005879   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:18.006192   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:18.006247   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:18.505849   32280 type.go:168] "Request Body" body=""
	I1002 20:15:18.505919   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:18.506247   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:19.004886   32280 type.go:168] "Request Body" body=""
	I1002 20:15:19.004961   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:19.005261   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:19.505076   32280 type.go:168] "Request Body" body=""
	I1002 20:15:19.505144   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:19.505477   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:20.005007   32280 type.go:168] "Request Body" body=""
	I1002 20:15:20.005071   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:20.005381   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:20.505183   32280 type.go:168] "Request Body" body=""
	I1002 20:15:20.505251   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:20.505582   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:20.505635   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:21.004962   32280 type.go:168] "Request Body" body=""
	I1002 20:15:21.005029   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:21.005332   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:21.504914   32280 type.go:168] "Request Body" body=""
	I1002 20:15:21.504977   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:21.505316   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:22.004889   32280 type.go:168] "Request Body" body=""
	I1002 20:15:22.004987   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:22.005283   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:22.504874   32280 type.go:168] "Request Body" body=""
	I1002 20:15:22.504937   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:22.505267   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:23.004838   32280 type.go:168] "Request Body" body=""
	I1002 20:15:23.004900   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:23.005227   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:23.005283   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:23.505836   32280 type.go:168] "Request Body" body=""
	I1002 20:15:23.505908   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:23.506231   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:24.005841   32280 type.go:168] "Request Body" body=""
	I1002 20:15:24.005903   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:24.006198   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:24.504997   32280 type.go:168] "Request Body" body=""
	I1002 20:15:24.505059   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:24.505375   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:25.004926   32280 type.go:168] "Request Body" body=""
	I1002 20:15:25.005003   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:25.005304   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:25.005362   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:25.504905   32280 type.go:168] "Request Body" body=""
	I1002 20:15:25.504971   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:25.505275   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:26.004817   32280 type.go:168] "Request Body" body=""
	I1002 20:15:26.004887   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:26.005210   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:26.505879   32280 type.go:168] "Request Body" body=""
	I1002 20:15:26.506038   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:26.506430   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:27.005027   32280 type.go:168] "Request Body" body=""
	I1002 20:15:27.005114   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:27.005415   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:27.005474   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:27.505002   32280 type.go:168] "Request Body" body=""
	I1002 20:15:27.505082   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:27.505420   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:28.004986   32280 type.go:168] "Request Body" body=""
	I1002 20:15:28.005053   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:28.005352   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:28.504930   32280 type.go:168] "Request Body" body=""
	I1002 20:15:28.505000   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:28.505364   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:29.004934   32280 type.go:168] "Request Body" body=""
	I1002 20:15:29.005011   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:29.005308   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:29.505191   32280 type.go:168] "Request Body" body=""
	I1002 20:15:29.505283   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:29.505637   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:29.505741   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:30.005210   32280 type.go:168] "Request Body" body=""
	I1002 20:15:30.005271   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:30.005562   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:30.505505   32280 type.go:168] "Request Body" body=""
	I1002 20:15:30.505575   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:30.505938   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:31.005554   32280 type.go:168] "Request Body" body=""
	I1002 20:15:31.005640   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:31.005967   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:31.505585   32280 type.go:168] "Request Body" body=""
	I1002 20:15:31.505683   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:31.506006   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:31.506056   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:32.005634   32280 type.go:168] "Request Body" body=""
	I1002 20:15:32.005710   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:32.006002   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:32.505666   32280 type.go:168] "Request Body" body=""
	I1002 20:15:32.505734   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:32.506032   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:33.005694   32280 type.go:168] "Request Body" body=""
	I1002 20:15:33.005768   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:33.006118   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:33.505738   32280 type.go:168] "Request Body" body=""
	I1002 20:15:33.505801   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:33.506120   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:33.506192   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:34.005749   32280 type.go:168] "Request Body" body=""
	I1002 20:15:34.005835   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:34.006190   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:34.504997   32280 type.go:168] "Request Body" body=""
	I1002 20:15:34.505063   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:34.505359   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:35.004979   32280 type.go:168] "Request Body" body=""
	I1002 20:15:35.005040   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:35.005361   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:35.504958   32280 type.go:168] "Request Body" body=""
	I1002 20:15:35.505028   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:35.505325   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:36.004893   32280 type.go:168] "Request Body" body=""
	I1002 20:15:36.004960   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:36.005275   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:36.005327   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:36.504861   32280 type.go:168] "Request Body" body=""
	I1002 20:15:36.504942   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:36.505241   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:37.004818   32280 type.go:168] "Request Body" body=""
	I1002 20:15:37.004883   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:37.005203   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:37.504876   32280 type.go:168] "Request Body" body=""
	I1002 20:15:37.504951   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:37.505259   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:38.004888   32280 type.go:168] "Request Body" body=""
	I1002 20:15:38.004979   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:38.005286   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:38.504969   32280 type.go:168] "Request Body" body=""
	I1002 20:15:38.505049   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:38.505376   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:38.505429   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:39.004950   32280 type.go:168] "Request Body" body=""
	I1002 20:15:39.005018   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:39.005330   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:39.505071   32280 type.go:168] "Request Body" body=""
	I1002 20:15:39.505137   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:39.505431   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:40.005007   32280 type.go:168] "Request Body" body=""
	I1002 20:15:40.005090   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:40.005385   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:40.505098   32280 type.go:168] "Request Body" body=""
	I1002 20:15:40.505197   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:40.505502   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:40.505558   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:41.005068   32280 type.go:168] "Request Body" body=""
	I1002 20:15:41.005132   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:41.005435   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:41.505003   32280 type.go:168] "Request Body" body=""
	I1002 20:15:41.505067   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:41.505459   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:42.005029   32280 type.go:168] "Request Body" body=""
	I1002 20:15:42.005101   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:42.005410   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:42.505061   32280 type.go:168] "Request Body" body=""
	I1002 20:15:42.505128   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:42.505440   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:43.005053   32280 type.go:168] "Request Body" body=""
	I1002 20:15:43.005164   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:43.005534   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:43.005626   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:43.505101   32280 type.go:168] "Request Body" body=""
	I1002 20:15:43.505195   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:43.505496   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:44.005084   32280 type.go:168] "Request Body" body=""
	I1002 20:15:44.005178   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:44.005496   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:44.505460   32280 type.go:168] "Request Body" body=""
	I1002 20:15:44.505524   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:44.505855   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:45.005560   32280 type.go:168] "Request Body" body=""
	I1002 20:15:45.005631   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:45.005984   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:45.006035   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:45.505602   32280 type.go:168] "Request Body" body=""
	I1002 20:15:45.505705   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:45.506005   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:46.005627   32280 type.go:168] "Request Body" body=""
	I1002 20:15:46.005713   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:46.006024   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:46.505689   32280 type.go:168] "Request Body" body=""
	I1002 20:15:46.505755   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:46.506045   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:47.005272   32280 type.go:168] "Request Body" body=""
	I1002 20:15:47.005340   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:47.005666   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:47.505213   32280 type.go:168] "Request Body" body=""
	I1002 20:15:47.505283   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:47.505638   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:47.505724   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:48.004992   32280 type.go:168] "Request Body" body=""
	I1002 20:15:48.005062   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:48.005371   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:48.504960   32280 type.go:168] "Request Body" body=""
	I1002 20:15:48.505025   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:48.505343   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:49.004918   32280 type.go:168] "Request Body" body=""
	I1002 20:15:49.004982   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:49.005325   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:49.505056   32280 type.go:168] "Request Body" body=""
	I1002 20:15:49.505122   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:49.505424   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:50.004984   32280 type.go:168] "Request Body" body=""
	I1002 20:15:50.005048   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:50.005347   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:50.005399   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:50.505099   32280 type.go:168] "Request Body" body=""
	I1002 20:15:50.505173   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:50.505478   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:51.005059   32280 type.go:168] "Request Body" body=""
	I1002 20:15:51.005133   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:51.005463   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:51.505016   32280 type.go:168] "Request Body" body=""
	I1002 20:15:51.505084   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:51.505393   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:52.005067   32280 type.go:168] "Request Body" body=""
	I1002 20:15:52.005155   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:52.005476   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:52.005533   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:52.505040   32280 type.go:168] "Request Body" body=""
	I1002 20:15:52.505105   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:52.505403   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:53.004962   32280 type.go:168] "Request Body" body=""
	I1002 20:15:53.005025   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:53.005338   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:53.504924   32280 type.go:168] "Request Body" body=""
	I1002 20:15:53.505008   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:53.505327   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:54.004900   32280 type.go:168] "Request Body" body=""
	I1002 20:15:54.004970   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:54.005314   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:54.505066   32280 type.go:168] "Request Body" body=""
	I1002 20:15:54.505137   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:54.505438   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:54.505496   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:55.005002   32280 type.go:168] "Request Body" body=""
	I1002 20:15:55.005067   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:55.005372   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:55.504901   32280 type.go:168] "Request Body" body=""
	I1002 20:15:55.504971   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:55.505282   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:56.004915   32280 type.go:168] "Request Body" body=""
	I1002 20:15:56.004985   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:56.005314   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:56.504880   32280 type.go:168] "Request Body" body=""
	I1002 20:15:56.504955   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:56.505267   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:57.004835   32280 type.go:168] "Request Body" body=""
	I1002 20:15:57.004920   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:57.005242   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:57.005291   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:57.505877   32280 type.go:168] "Request Body" body=""
	I1002 20:15:57.505940   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:57.506245   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:58.005907   32280 type.go:168] "Request Body" body=""
	I1002 20:15:58.005991   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:58.006342   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:58.504964   32280 type.go:168] "Request Body" body=""
	I1002 20:15:58.505032   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:58.505329   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:59.004907   32280 type.go:168] "Request Body" body=""
	I1002 20:15:59.005002   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:59.005333   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:59.005397   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:59.505208   32280 type.go:168] "Request Body" body=""
	I1002 20:15:59.505273   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:59.505578   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:00.005007   32280 type.go:168] "Request Body" body=""
	I1002 20:16:00.005070   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:00.005368   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:00.505154   32280 type.go:168] "Request Body" body=""
	I1002 20:16:00.505223   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:00.505548   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:01.005111   32280 type.go:168] "Request Body" body=""
	I1002 20:16:01.005187   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:01.005494   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:01.005546   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:01.505154   32280 type.go:168] "Request Body" body=""
	I1002 20:16:01.505217   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:01.505529   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:02.005146   32280 type.go:168] "Request Body" body=""
	I1002 20:16:02.005224   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:02.005550   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:02.505113   32280 type.go:168] "Request Body" body=""
	I1002 20:16:02.505181   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:02.505501   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:03.005066   32280 type.go:168] "Request Body" body=""
	I1002 20:16:03.005132   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:03.005422   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:03.505093   32280 type.go:168] "Request Body" body=""
	I1002 20:16:03.505162   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:03.505508   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:03.505564   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical 500 ms poll of GET https://192.168.49.2:8441/api/v1/nodes/functional-753218 repeats from 20:16:04.005 through 20:17:04.005; every request body is empty, every attempt gets no response (status="", headers="", milliseconds=0), and node_ready.go:55 logs the same "connection refused" warning roughly every 2.5 s, the last at 20:17:04.005 ...]
	I1002 20:17:04.505123   32280 type.go:168] "Request Body" body=""
	I1002 20:17:04.505251   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:04.505555   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:05.005089   32280 type.go:168] "Request Body" body=""
	I1002 20:17:05.005151   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:05.005451   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:05.505031   32280 type.go:168] "Request Body" body=""
	I1002 20:17:05.505104   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:05.505423   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:06.004950   32280 type.go:168] "Request Body" body=""
	I1002 20:17:06.005039   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:06.005333   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:06.504958   32280 type.go:168] "Request Body" body=""
	I1002 20:17:06.505029   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:06.505369   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:06.505429   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:07.004923   32280 type.go:168] "Request Body" body=""
	I1002 20:17:07.004993   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:07.005301   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:07.504862   32280 type.go:168] "Request Body" body=""
	I1002 20:17:07.504930   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:07.505255   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:08.004807   32280 type.go:168] "Request Body" body=""
	I1002 20:17:08.004883   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:08.005186   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:08.505831   32280 type.go:168] "Request Body" body=""
	I1002 20:17:08.505899   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:08.506230   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:08.506299   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:09.005828   32280 type.go:168] "Request Body" body=""
	I1002 20:17:09.005891   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:09.006223   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:09.505024   32280 type.go:168] "Request Body" body=""
	I1002 20:17:09.505092   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:09.505459   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:10.005013   32280 type.go:168] "Request Body" body=""
	I1002 20:17:10.005077   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:10.005388   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:10.505140   32280 type.go:168] "Request Body" body=""
	I1002 20:17:10.505212   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:10.505598   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:11.005128   32280 type.go:168] "Request Body" body=""
	I1002 20:17:11.005195   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:11.005534   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:11.005597   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:11.505120   32280 type.go:168] "Request Body" body=""
	I1002 20:17:11.505189   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:11.505524   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:12.005153   32280 type.go:168] "Request Body" body=""
	I1002 20:17:12.005225   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:12.005562   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:12.505110   32280 type.go:168] "Request Body" body=""
	I1002 20:17:12.505174   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:12.505532   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:13.005106   32280 type.go:168] "Request Body" body=""
	I1002 20:17:13.005174   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:13.005476   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:13.505007   32280 type.go:168] "Request Body" body=""
	I1002 20:17:13.505068   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:13.505435   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:13.505488   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:14.005005   32280 type.go:168] "Request Body" body=""
	I1002 20:17:14.005066   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:14.005383   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:14.505172   32280 type.go:168] "Request Body" body=""
	I1002 20:17:14.505244   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:14.505573   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:15.005134   32280 type.go:168] "Request Body" body=""
	I1002 20:17:15.005205   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:15.005503   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:15.505066   32280 type.go:168] "Request Body" body=""
	I1002 20:17:15.505141   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:15.505446   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:15.505511   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:16.005011   32280 type.go:168] "Request Body" body=""
	I1002 20:17:16.005080   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:16.005386   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:16.504935   32280 type.go:168] "Request Body" body=""
	I1002 20:17:16.505015   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:16.505327   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:17.004855   32280 type.go:168] "Request Body" body=""
	I1002 20:17:17.004919   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:17.005223   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:17.505899   32280 type.go:168] "Request Body" body=""
	I1002 20:17:17.505967   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:17.506302   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:17.506357   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:18.004876   32280 type.go:168] "Request Body" body=""
	I1002 20:17:18.004943   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:18.005245   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:18.504839   32280 type.go:168] "Request Body" body=""
	I1002 20:17:18.504915   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:18.505232   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:19.005865   32280 type.go:168] "Request Body" body=""
	I1002 20:17:19.005947   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:19.006269   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:19.505022   32280 type.go:168] "Request Body" body=""
	I1002 20:17:19.505094   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:19.505407   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:20.004991   32280 type.go:168] "Request Body" body=""
	I1002 20:17:20.005069   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:20.005405   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:20.005466   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:20.505228   32280 type.go:168] "Request Body" body=""
	I1002 20:17:20.505297   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:20.505591   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:21.005210   32280 type.go:168] "Request Body" body=""
	I1002 20:17:21.005276   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:21.005584   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:21.505147   32280 type.go:168] "Request Body" body=""
	I1002 20:17:21.505208   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:21.505526   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:22.005059   32280 type.go:168] "Request Body" body=""
	I1002 20:17:22.005124   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:22.005426   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:22.504985   32280 type.go:168] "Request Body" body=""
	I1002 20:17:22.505049   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:22.505347   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:22.505407   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:23.004930   32280 type.go:168] "Request Body" body=""
	I1002 20:17:23.005006   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:23.005328   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:23.504881   32280 type.go:168] "Request Body" body=""
	I1002 20:17:23.504945   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:23.505245   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:24.005892   32280 type.go:168] "Request Body" body=""
	I1002 20:17:24.005969   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:24.006315   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:24.505033   32280 type.go:168] "Request Body" body=""
	I1002 20:17:24.505105   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:24.505414   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:24.505472   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:25.004948   32280 type.go:168] "Request Body" body=""
	I1002 20:17:25.005011   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:25.005380   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:25.504947   32280 type.go:168] "Request Body" body=""
	I1002 20:17:25.505016   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:25.505308   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:26.004843   32280 type.go:168] "Request Body" body=""
	I1002 20:17:26.004909   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:26.005238   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:26.504813   32280 type.go:168] "Request Body" body=""
	I1002 20:17:26.504873   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:26.505173   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:27.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:17:27.005931   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:27.006247   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:27.006305   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:27.505850   32280 type.go:168] "Request Body" body=""
	I1002 20:17:27.505914   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:27.506242   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:28.004933   32280 type.go:168] "Request Body" body=""
	I1002 20:17:28.005009   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:28.005342   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:28.504866   32280 type.go:168] "Request Body" body=""
	I1002 20:17:28.505005   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:28.505322   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:29.004896   32280 type.go:168] "Request Body" body=""
	I1002 20:17:29.004966   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:29.005261   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:29.505004   32280 type.go:168] "Request Body" body=""
	I1002 20:17:29.505069   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:29.505365   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:29.505422   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:30.004927   32280 type.go:168] "Request Body" body=""
	I1002 20:17:30.004988   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:30.005290   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:30.504959   32280 type.go:168] "Request Body" body=""
	I1002 20:17:30.505027   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:30.505340   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:31.004932   32280 type.go:168] "Request Body" body=""
	I1002 20:17:31.005002   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:31.005338   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:31.504887   32280 type.go:168] "Request Body" body=""
	I1002 20:17:31.504956   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:31.505260   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:32.004876   32280 type.go:168] "Request Body" body=""
	I1002 20:17:32.004950   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:32.005251   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:32.005312   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:32.505895   32280 type.go:168] "Request Body" body=""
	I1002 20:17:32.505961   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:32.506274   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:33.004876   32280 type.go:168] "Request Body" body=""
	I1002 20:17:33.004958   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:33.005280   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:33.504821   32280 type.go:168] "Request Body" body=""
	I1002 20:17:33.504892   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:33.505232   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:34.005931   32280 type.go:168] "Request Body" body=""
	I1002 20:17:34.006061   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:34.006376   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:34.006427   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:34.505046   32280 type.go:168] "Request Body" body=""
	I1002 20:17:34.505112   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:34.505397   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:35.004981   32280 type.go:168] "Request Body" body=""
	I1002 20:17:35.005045   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:35.005370   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:35.504929   32280 type.go:168] "Request Body" body=""
	I1002 20:17:35.505015   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:35.505318   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:36.004980   32280 type.go:168] "Request Body" body=""
	I1002 20:17:36.005058   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:36.005394   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:36.504997   32280 type.go:168] "Request Body" body=""
	I1002 20:17:36.505060   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:36.505342   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:36.505398   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:37.004903   32280 type.go:168] "Request Body" body=""
	I1002 20:17:37.004978   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:37.005282   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:37.504878   32280 type.go:168] "Request Body" body=""
	I1002 20:17:37.504942   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:37.505231   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:38.005855   32280 type.go:168] "Request Body" body=""
	I1002 20:17:38.005918   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:38.006208   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:38.505835   32280 type.go:168] "Request Body" body=""
	I1002 20:17:38.505904   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:38.506229   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:38.506296   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:39.004853   32280 type.go:168] "Request Body" body=""
	I1002 20:17:39.004944   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:39.005263   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:39.505135   32280 type.go:168] "Request Body" body=""
	I1002 20:17:39.505217   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:39.505615   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:40.005193   32280 type.go:168] "Request Body" body=""
	I1002 20:17:40.005282   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:40.005581   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:40.505135   32280 type.go:168] "Request Body" body=""
	I1002 20:17:40.505207   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:40.505537   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:41.005103   32280 type.go:168] "Request Body" body=""
	I1002 20:17:41.005165   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:41.005505   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:41.005563   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:41.505063   32280 type.go:168] "Request Body" body=""
	I1002 20:17:41.505150   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:41.505490   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:42.005054   32280 type.go:168] "Request Body" body=""
	I1002 20:17:42.005160   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:42.005471   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:42.505019   32280 type.go:168] "Request Body" body=""
	I1002 20:17:42.505084   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:42.505402   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:43.004932   32280 type.go:168] "Request Body" body=""
	I1002 20:17:43.005022   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:43.005350   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:43.504923   32280 type.go:168] "Request Body" body=""
	I1002 20:17:43.505007   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:43.505339   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:43.505393   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:44.004924   32280 type.go:168] "Request Body" body=""
	I1002 20:17:44.005006   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:44.005323   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:44.505096   32280 type.go:168] "Request Body" body=""
	I1002 20:17:44.505171   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:44.505478   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:45.005011   32280 type.go:168] "Request Body" body=""
	I1002 20:17:45.005090   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:45.005399   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:45.504952   32280 type.go:168] "Request Body" body=""
	I1002 20:17:45.505012   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:45.505310   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:46.004864   32280 type.go:168] "Request Body" body=""
	I1002 20:17:46.004951   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:46.005294   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:46.005355   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:46.504873   32280 type.go:168] "Request Body" body=""
	I1002 20:17:46.504940   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:46.505244   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:47.005848   32280 type.go:168] "Request Body" body=""
	I1002 20:17:47.005930   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:47.006252   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:47.504816   32280 type.go:168] "Request Body" body=""
	I1002 20:17:47.504905   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:47.505215   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:48.005846   32280 type.go:168] "Request Body" body=""
	I1002 20:17:48.005933   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:48.006242   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:48.006300   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:48.505916   32280 type.go:168] "Request Body" body=""
	I1002 20:17:48.505980   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:48.506270   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:49.004828   32280 type.go:168] "Request Body" body=""
	I1002 20:17:49.004910   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:49.005240   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:49.504935   32280 type.go:168] "Request Body" body=""
	I1002 20:17:49.505024   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:49.505373   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:50.004932   32280 type.go:168] "Request Body" body=""
	I1002 20:17:50.005011   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:50.005340   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:50.505078   32280 type.go:168] "Request Body" body=""
	I1002 20:17:50.505147   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:50.505479   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:50.505532   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:51.005024   32280 type.go:168] "Request Body" body=""
	I1002 20:17:51.005103   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:51.005420   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:51.504998   32280 type.go:168] "Request Body" body=""
	I1002 20:17:51.505075   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:51.505410   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:52.005000   32280 type.go:168] "Request Body" body=""
	I1002 20:17:52.005081   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:52.005428   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:52.505012   32280 type.go:168] "Request Body" body=""
	I1002 20:17:52.505100   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:52.505419   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:53.005015   32280 type.go:168] "Request Body" body=""
	I1002 20:17:53.005100   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:53.005438   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:53.005495   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:53.504988   32280 type.go:168] "Request Body" body=""
	I1002 20:17:53.505059   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:53.505385   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:54.004971   32280 type.go:168] "Request Body" body=""
	I1002 20:17:54.005048   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:54.005388   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:54.505199   32280 type.go:168] "Request Body" body=""
	I1002 20:17:54.505286   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:54.505624   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:55.005199   32280 type.go:168] "Request Body" body=""
	I1002 20:17:55.005287   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:55.005639   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:55.005734   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:55.505238   32280 type.go:168] "Request Body" body=""
	I1002 20:17:55.505303   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:55.505621   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:56.005174   32280 type.go:168] "Request Body" body=""
	I1002 20:17:56.005258   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:56.005612   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:56.505166   32280 type.go:168] "Request Body" body=""
	I1002 20:17:56.505231   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:56.505523   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:57.005076   32280 type.go:168] "Request Body" body=""
	I1002 20:17:57.005156   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:57.005631   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:57.505096   32280 type.go:168] "Request Body" body=""
	I1002 20:17:57.505167   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:57.505488   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:57.505554   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:58.005160   32280 type.go:168] "Request Body" body=""
	I1002 20:17:58.005227   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:58.005552   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:58.505084   32280 type.go:168] "Request Body" body=""
	I1002 20:17:58.505166   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:58.505512   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:59.005073   32280 type.go:168] "Request Body" body=""
	I1002 20:17:59.005138   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:59.005430   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:59.505390   32280 type.go:168] "Request Body" body=""
	I1002 20:17:59.505459   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:59.505823   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:59.505890   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:00.005468   32280 type.go:168] "Request Body" body=""
	I1002 20:18:00.005540   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:00.005877   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:00.505768   32280 type.go:168] "Request Body" body=""
	I1002 20:18:00.505843   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:00.506177   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:01.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:18:01.005945   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:01.006334   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:01.504923   32280 type.go:168] "Request Body" body=""
	I1002 20:18:01.504996   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:01.505321   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:02.004953   32280 type.go:168] "Request Body" body=""
	I1002 20:18:02.005017   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:02.005334   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:02.005385   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:02.504884   32280 type.go:168] "Request Body" body=""
	I1002 20:18:02.504963   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:02.505259   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:03.004934   32280 type.go:168] "Request Body" body=""
	I1002 20:18:03.005005   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:03.005356   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:03.504932   32280 type.go:168] "Request Body" body=""
	I1002 20:18:03.505015   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:03.505307   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:04.004878   32280 type.go:168] "Request Body" body=""
	I1002 20:18:04.004960   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:04.005291   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:04.505045   32280 type.go:168] "Request Body" body=""
	I1002 20:18:04.505131   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:04.505465   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:04.505520   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:05.005008   32280 type.go:168] "Request Body" body=""
	I1002 20:18:05.005088   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:05.005422   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:05.504977   32280 type.go:168] "Request Body" body=""
	I1002 20:18:05.505046   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:05.505355   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:06.004890   32280 type.go:168] "Request Body" body=""
	I1002 20:18:06.004955   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:06.005271   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:06.505878   32280 type.go:168] "Request Body" body=""
	I1002 20:18:06.505942   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:06.506244   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:06.506297   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:07.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:18:07.005943   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:07.006253   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:07.504887   32280 type.go:168] "Request Body" body=""
	I1002 20:18:07.504964   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:07.505251   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:08.004916   32280 type.go:168] "Request Body" body=""
	I1002 20:18:08.004981   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:08.005306   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:08.504856   32280 type.go:168] "Request Body" body=""
	I1002 20:18:08.504941   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:08.505239   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:09.005880   32280 type.go:168] "Request Body" body=""
	I1002 20:18:09.005952   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:09.006285   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:09.006339   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:09.505080   32280 type.go:168] "Request Body" body=""
	I1002 20:18:09.505146   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:09.505447   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:10.005082   32280 type.go:168] "Request Body" body=""
	I1002 20:18:10.005147   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:10.005473   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:10.505242   32280 type.go:168] "Request Body" body=""
	I1002 20:18:10.505307   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:10.505606   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:11.005169   32280 type.go:168] "Request Body" body=""
	I1002 20:18:11.005243   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:11.005570   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:11.505121   32280 type.go:168] "Request Body" body=""
	I1002 20:18:11.505186   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:11.505487   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:11.505538   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:12.005071   32280 type.go:168] "Request Body" body=""
	I1002 20:18:12.005141   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:12.005461   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:12.505817   32280 type.go:168] "Request Body" body=""
	I1002 20:18:12.505883   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:12.506177   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:13.005815   32280 type.go:168] "Request Body" body=""
	I1002 20:18:13.005887   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:13.006211   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:13.505873   32280 type.go:168] "Request Body" body=""
	I1002 20:18:13.505942   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:13.506236   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:13.506287   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:14.004813   32280 type.go:168] "Request Body" body=""
	I1002 20:18:14.004883   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:14.005208   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:14.505838   32280 type.go:168] "Request Body" body=""
	I1002 20:18:14.505912   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:14.506225   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:15.005871   32280 type.go:168] "Request Body" body=""
	I1002 20:18:15.005949   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:15.006278   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:15.504830   32280 type.go:168] "Request Body" body=""
	I1002 20:18:15.504900   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:15.505190   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:16.004845   32280 type.go:168] "Request Body" body=""
	I1002 20:18:16.004935   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:16.005267   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:16.005321   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:16.504844   32280 type.go:168] "Request Body" body=""
	I1002 20:18:16.504915   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:16.505208   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:17.004848   32280 type.go:168] "Request Body" body=""
	I1002 20:18:17.005199   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:17.005523   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:17.505033   32280 type.go:168] "Request Body" body=""
	I1002 20:18:17.505107   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:17.505434   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:18.004982   32280 type.go:168] "Request Body" body=""
	I1002 20:18:18.005069   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:18.005443   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:18.005498   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:18.505161   32280 type.go:168] "Request Body" body=""
	I1002 20:18:18.505228   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:18.505530   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:19.005238   32280 type.go:168] "Request Body" body=""
	I1002 20:18:19.005302   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:19.005626   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:19.505401   32280 type.go:168] "Request Body" body=""
	I1002 20:18:19.505466   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:19.505798   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:20.005591   32280 type.go:168] "Request Body" body=""
	I1002 20:18:20.005673   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:20.006000   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:20.006051   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:20.505823   32280 type.go:168] "Request Body" body=""
	I1002 20:18:20.505886   32280 node_ready.go:38] duration metric: took 6m0.001160736s for node "functional-753218" to be "Ready" ...
	I1002 20:18:20.508034   32280 out.go:203] 
	W1002 20:18:20.509328   32280 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:18:20.509341   32280 out.go:285] * 
	W1002 20:18:20.511008   32280 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:18:20.512144   32280 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-753218 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.819968005s for "functional-753218" cluster.
I1002 20:18:20.941871   12851 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
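The failure above is the tail end of a six-minute poll loop: the client issues GET https://192.168.49.2:8441/api/v1/nodes/functional-753218 roughly every 500ms, every attempt ends in "connection refused", and the WaitNodeCondition deadline finally expires. A minimal client-go sketch of that kind of Ready-condition wait follows; it is illustrative only (the kubeconfig path, timeout, and poll interval are assumptions, and this is not minikube's actual node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the CI job above points KUBECONFIG elsewhere.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same shape as the log: a hard deadline with a fixed-interval retry.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "functional-753218", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			// This is the branch the test hit: GUEST_START / context deadline exceeded.
			fmt.Println("deadline exceeded waiting for node Ready")
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}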
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753218
helpers_test.go:243: (dbg) docker inspect functional-753218:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	        "Created": "2025-10-02T20:04:01.746609873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27425,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:04:01.779241985Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hosts",
	        "LogPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2-json.log",
	        "Name": "/functional-753218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	                "LowerDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753218",
	                "Source": "/var/lib/docker/volumes/functional-753218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753218",
	                "name.minikube.sigs.k8s.io": "functional-753218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "004081b2f3377fd186346a9b01eb1a4bc9a3def8255492ba6d60a3d97d3ae02f",
	            "SandboxKey": "/var/run/docker/netns/004081b2f337",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:da:57:41:87:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5a9808f557430f8bb8f634f8b99193d9e4883c7bab84d388ec04ce3ac0291ef6",
	                    "EndpointID": "2b07b33f476d99b70eed072dbaf735ba0ad3a85f48f17fd63921a2bfbfd2db45",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753218",
	                        "13014a14c3fc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
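Note that the container itself is "Running" and Docker has published 8441/tcp at 127.0.0.1:32781, so the refused connections come from the apiserver inside the node, not from Docker networking. A quick way to confirm that from the host is to dial the published port directly; a minimal sketch (the address is copied from the inspect output above and is not stable across runs):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 127.0.0.1:32781 is the host binding for the node's 8441/tcp per the
	// docker inspect output above.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:32781", 2*time.Second)
	if err != nil {
		// Expected in this failure mode: connect: connection refused.
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}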
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218: exit status 2 (301.491105ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 logs -n 25
helpers_test.go:260: TestFunctional/serial/SoftStart logs: 
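The dump below uses the standard klog header, whose legend is repeated in the output itself ("[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"). A throwaway Go sketch for splitting those fields when sifting through reports like this one (the regexp and field names are illustrative, not part of minikube):

package main

import (
	"fmt"
	"regexp"
)

// severity, mmdd, time, threadid, file:line, message
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := `W1002 20:18:20.509328   32280 out.go:285] X Exiting due to GUEST_START: failed to start node`
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s threadid=%s source=%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}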
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-961266                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-961266   │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │ 02 Oct 25 19:46 UTC │
	│ start   │ --download-only -p download-docker-213285 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-213285 │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	│ delete  │ -p download-docker-213285                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-213285 │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │ 02 Oct 25 19:46 UTC │
	│ start   │ --download-only -p binary-mirror-331754 --alsologtostderr --binary-mirror http://127.0.0.1:42675 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-331754   │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	│ delete  │ -p binary-mirror-331754                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-331754   │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │ 02 Oct 25 19:46 UTC │
	│ addons  │ disable dashboard -p addons-486748                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-486748          │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	│ addons  │ enable dashboard -p addons-486748                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-486748          │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	│ start   │ -p addons-486748 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-486748          │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	│ delete  │ -p addons-486748                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-486748          │ jenkins │ v1.37.0 │ 02 Oct 25 19:55 UTC │ 02 Oct 25 19:55 UTC │
	│ start   │ -p nospam-547008 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-547008 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 19:55 UTC │                     │
	│ start   │ nospam-547008 --log_dir /tmp/nospam-547008 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │                     │
	│ start   │ nospam-547008 --log_dir /tmp/nospam-547008 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │                     │
	│ start   │ nospam-547008 --log_dir /tmp/nospam-547008 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │                     │
	│ pause   │ nospam-547008 --log_dir /tmp/nospam-547008 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ pause   │ nospam-547008 --log_dir /tmp/nospam-547008 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ pause   │ nospam-547008 --log_dir /tmp/nospam-547008 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ delete  │ -p nospam-547008                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ start   │ -p functional-753218 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-753218      │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │                     │
	│ start   │ -p functional-753218 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-753218      │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:12:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:12:14.161053   32280 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:12:14.161314   32280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:12:14.161324   32280 out.go:374] Setting ErrFile to fd 2...
	I1002 20:12:14.161329   32280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:12:14.161525   32280 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:12:14.161965   32280 out.go:368] Setting JSON to false
	I1002 20:12:14.162918   32280 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3283,"bootTime":1759432651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:12:14.163001   32280 start.go:140] virtualization: kvm guest
	I1002 20:12:14.165258   32280 out.go:179] * [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:12:14.166596   32280 notify.go:221] Checking for updates...
	I1002 20:12:14.166661   32280 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:12:14.168151   32280 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:12:14.169781   32280 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:14.170964   32280 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:12:14.172159   32280 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:12:14.173393   32280 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:12:14.175005   32280 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:12:14.175089   32280 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:12:14.198042   32280 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:12:14.198110   32280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:12:14.249812   32280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:12:14.240278836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:12:14.249943   32280 docker.go:319] overlay module found
	I1002 20:12:14.251744   32280 out.go:179] * Using the docker driver based on existing profile
	I1002 20:12:14.252771   32280 start.go:306] selected driver: docker
	I1002 20:12:14.252788   32280 start.go:936] validating driver "docker" against &{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:12:14.252894   32280 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:12:14.253012   32280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:12:14.302717   32280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:12:14.29341416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:12:14.303277   32280 cni.go:84] Creating CNI manager for ""
	I1002 20:12:14.303332   32280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:12:14.303374   32280 start.go:350] cluster config:
	{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:12:14.305248   32280 out.go:179] * Starting "functional-753218" primary control-plane node in "functional-753218" cluster
	I1002 20:12:14.306703   32280 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:12:14.308110   32280 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:12:14.309231   32280 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:12:14.309270   32280 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:12:14.309292   32280 cache.go:59] Caching tarball of preloaded images
	I1002 20:12:14.309321   32280 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:12:14.309392   32280 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:12:14.309404   32280 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:12:14.309506   32280 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/config.json ...
	I1002 20:12:14.328595   32280 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:12:14.328612   32280 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:12:14.328641   32280 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:12:14.328685   32280 start.go:361] acquireMachinesLock for functional-753218: {Name:mk742badf6f1dbafca8397e398143e758831ae3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:12:14.328749   32280 start.go:365] duration metric: took 40.346µs to acquireMachinesLock for "functional-753218"
	I1002 20:12:14.328768   32280 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:12:14.328773   32280 fix.go:55] fixHost starting: 
	I1002 20:12:14.328978   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:14.345315   32280 fix.go:113] recreateIfNeeded on functional-753218: state=Running err=<nil>
	W1002 20:12:14.345339   32280 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:12:14.347103   32280 out.go:252] * Updating the running docker "functional-753218" container ...
	I1002 20:12:14.347127   32280 machine.go:93] provisionDockerMachine start ...
	I1002 20:12:14.347175   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.364778   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:14.365056   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:14.365071   32280 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:12:14.506481   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:12:14.506514   32280 ubuntu.go:182] provisioning hostname "functional-753218"
	I1002 20:12:14.506576   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.523646   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:14.523886   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:14.523904   32280 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753218 && echo "functional-753218" | sudo tee /etc/hostname
	I1002 20:12:14.674327   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:12:14.674412   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.691957   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:14.692191   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:14.692210   32280 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753218/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:12:14.834109   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
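
The hostname step above is a small idempotent patch to /etc/hosts: if no line already maps the hostname, the provisioner rewrites the 127.0.1.1 entry, or appends one if none exists. A minimal standalone sketch of the same logic, assuming a Debian-style /etc/hosts; the HOST value is illustrative:

    # Idempotently map a hostname to 127.0.1.1 (sketch, not minikube source).
    HOST=functional-753218
    if ! grep -q "[[:space:]]${HOST}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        # Replace the existing 127.0.1.1 entry in place.
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${HOST}/" /etc/hosts
      else
        # No 127.0.1.1 entry yet: append one.
        echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts
      fi
    fi
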
	I1002 20:12:14.834144   32280 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:12:14.834205   32280 ubuntu.go:190] setting up certificates
	I1002 20:12:14.834219   32280 provision.go:84] configureAuth start
	I1002 20:12:14.834287   32280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:12:14.852021   32280 provision.go:143] copyHostCerts
	I1002 20:12:14.852056   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:12:14.852091   32280 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:12:14.852111   32280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:12:14.852184   32280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:12:14.852289   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:12:14.852315   32280 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:12:14.852322   32280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:12:14.852367   32280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:12:14.852431   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:12:14.852454   32280 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:12:14.852460   32280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:12:14.852497   32280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:12:14.852565   32280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.functional-753218 san=[127.0.0.1 192.168.49.2 functional-753218 localhost minikube]
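
configureAuth refreshes the client certs in the profile directory and then mints a server certificate signed by the profile CA, with the SANs listed in the line above (127.0.0.1, 192.168.49.2, functional-753218, localhost, minikube). minikube does this in Go; the openssl sketch below is only an assumed equivalent, with file names matching the paths in the log:

    # Assumed openssl equivalent of the server-cert generation logged above.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.functional-753218"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-753218,DNS:localhost,DNS:minikube')
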
	I1002 20:12:14.908205   32280 provision.go:177] copyRemoteCerts
	I1002 20:12:14.908265   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:12:14.908316   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.925225   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.025356   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:12:15.025415   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:12:15.042012   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:12:15.042068   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:12:15.059080   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:12:15.059140   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:12:15.075501   32280 provision.go:87] duration metric: took 241.264617ms to configureAuth
	I1002 20:12:15.075530   32280 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:12:15.075723   32280 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:12:15.075835   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.092499   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:15.092718   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:15.092740   32280 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:12:15.350871   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:12:15.350899   32280 machine.go:96] duration metric: took 1.003764785s to provisionDockerMachine
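
The SSH command above drops an environment file read by the CRI-O unit and restarts the service, so the --insecure-registry flag covers the cluster's service CIDR (10.96.0.0/12). A quick verification sketch (paths taken from the log):

    # Confirm the drop-in landed and the unit restarted (verification sketch).
    cat /etc/sysconfig/crio.minikube
    sudo systemctl show crio -p ActiveState -p ExecMainStartTimestamp
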
	I1002 20:12:15.350913   32280 start.go:294] postStartSetup for "functional-753218" (driver="docker")
	I1002 20:12:15.350926   32280 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:12:15.350976   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:12:15.351010   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.368192   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.468976   32280 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:12:15.472512   32280 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1002 20:12:15.472527   32280 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1002 20:12:15.472540   32280 command_runner.go:130] > VERSION_ID="12"
	I1002 20:12:15.472545   32280 command_runner.go:130] > VERSION="12 (bookworm)"
	I1002 20:12:15.472553   32280 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1002 20:12:15.472556   32280 command_runner.go:130] > ID=debian
	I1002 20:12:15.472560   32280 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1002 20:12:15.472565   32280 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1002 20:12:15.472572   32280 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1002 20:12:15.472618   32280 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:12:15.472635   32280 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:12:15.472666   32280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:12:15.472731   32280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:12:15.472806   32280 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:12:15.472815   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:12:15.472889   32280 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts -> hosts in /etc/test/nested/copy/12851
	I1002 20:12:15.472896   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts -> /etc/test/nested/copy/12851/hosts
	I1002 20:12:15.472925   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12851
	I1002 20:12:15.480384   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:12:15.496865   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts --> /etc/test/nested/copy/12851/hosts (40 bytes)
	I1002 20:12:15.513635   32280 start.go:297] duration metric: took 162.708522ms for postStartSetup
	I1002 20:12:15.513745   32280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:12:15.513794   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.530644   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.628445   32280 command_runner.go:130] > 39%
	I1002 20:12:15.628745   32280 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:12:15.633076   32280 command_runner.go:130] > 179G
	I1002 20:12:15.633306   32280 fix.go:57] duration metric: took 1.304525715s for fixHost
	I1002 20:12:15.633325   32280 start.go:84] releasing machines lock for "functional-753218", held for 1.30456494s
	I1002 20:12:15.633398   32280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:12:15.650579   32280 ssh_runner.go:195] Run: cat /version.json
	I1002 20:12:15.650618   32280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:12:15.650631   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.650688   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.668938   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.669107   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.765770   32280 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1002 20:12:15.817112   32280 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 20:12:15.819166   32280 ssh_runner.go:195] Run: systemctl --version
	I1002 20:12:15.825335   32280 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1002 20:12:15.825364   32280 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 20:12:15.825559   32280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:12:15.861701   32280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 20:12:15.866192   32280 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 20:12:15.866262   32280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:12:15.866323   32280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:12:15.874084   32280 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:12:15.874106   32280 start.go:496] detecting cgroup driver to use...
	I1002 20:12:15.874141   32280 detect.go:190] detected "systemd" cgroup driver on host os
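
The "systemd" result comes from probing the host rather than the container. A hedged approximation of that probe (not minikube's detect.go): on a cgroup v2 host with systemd as PID 1, the systemd cgroup driver is the expected answer.

    # Approximate the cgroup-driver detection (sketch).
    stat -fc %T /sys/fs/cgroup   # prints "cgroup2fs" on a cgroup v2 host
    ps -p 1 -o comm=             # prints "systemd" when systemd is init
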
	I1002 20:12:15.874206   32280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:12:15.887803   32280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:12:15.899530   32280 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:12:15.899588   32280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:12:15.913378   32280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:12:15.925494   32280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:12:16.013036   32280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:12:16.099049   32280 docker.go:234] disabling docker service ...
	I1002 20:12:16.099135   32280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:12:16.112698   32280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:12:16.124592   32280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:12:16.212924   32280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:12:16.298302   32280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:12:16.310529   32280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:12:16.324186   32280 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 20:12:16.324212   32280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:12:16.324248   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.332999   32280 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:12:16.333067   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.341758   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.350162   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.358406   32280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:12:16.365887   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.374465   32280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.382513   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
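
Taken together, the sed passes above patch four settings in the CRI-O drop-in: the pause image, the cgroup manager, the conmon cgroup, and an unprivileged-port sysctl. The expected end state of /etc/crio/crio.conf.d/02-crio.conf (sketch; surrounding keys omitted), plus a one-liner to check it:

    # Expected values after the edits above (other keys omitted):
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' \
      /etc/crio/crio.conf.d/02-crio.conf
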
	I1002 20:12:16.390861   32280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:12:16.397800   32280 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 20:12:16.397864   32280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:12:16.404831   32280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:12:16.487603   32280 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:12:19.404809   32280 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.917172928s)
	I1002 20:12:19.404840   32280 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:12:19.404889   32280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:12:19.408896   32280 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 20:12:19.408924   32280 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 20:12:19.408935   32280 command_runner.go:130] > Device: 0,59	Inode: 3843        Links: 1
	I1002 20:12:19.408947   32280 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:12:19.408956   32280 command_runner.go:130] > Access: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.408964   32280 command_runner.go:130] > Modify: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.408977   32280 command_runner.go:130] > Change: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.408989   32280 command_runner.go:130] >  Birth: 2025-10-02 20:12:19.387432116 +0000
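
"Will wait 60s for socket path" is a poll: stat the socket until it exists or the deadline passes. A bash rendering of the same wait (sketch, one-second granularity):

    # Poll for the CRI-O socket for up to 60s (sketch of the wait above).
    for _ in $(seq 60); do
      [ -S /var/run/crio/crio.sock ] && break
      sleep 1
    done
    [ -S /var/run/crio/crio.sock ] || echo 'crio.sock did not appear in 60s' >&2
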
	I1002 20:12:19.409044   32280 start.go:564] Will wait 60s for crictl version
	I1002 20:12:19.409092   32280 ssh_runner.go:195] Run: which crictl
	I1002 20:12:19.412689   32280 command_runner.go:130] > /usr/local/bin/crictl
	I1002 20:12:19.412744   32280 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:12:19.436957   32280 command_runner.go:130] > Version:  0.1.0
	I1002 20:12:19.436979   32280 command_runner.go:130] > RuntimeName:  cri-o
	I1002 20:12:19.436984   32280 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1002 20:12:19.436989   32280 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 20:12:19.437005   32280 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:12:19.437072   32280 ssh_runner.go:195] Run: crio --version
	I1002 20:12:19.464211   32280 command_runner.go:130] > crio version 1.34.1
	I1002 20:12:19.464228   32280 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:12:19.464234   32280 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:12:19.464240   32280 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:12:19.464244   32280 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:12:19.464248   32280 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:12:19.464252   32280 command_runner.go:130] >    Compiler:       gc
	I1002 20:12:19.464257   32280 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:12:19.464261   32280 command_runner.go:130] >    Linkmode:       static
	I1002 20:12:19.464264   32280 command_runner.go:130] >    BuildTags:
	I1002 20:12:19.464267   32280 command_runner.go:130] >      static
	I1002 20:12:19.464275   32280 command_runner.go:130] >      netgo
	I1002 20:12:19.464279   32280 command_runner.go:130] >      osusergo
	I1002 20:12:19.464283   32280 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:12:19.464288   32280 command_runner.go:130] >      seccomp
	I1002 20:12:19.464291   32280 command_runner.go:130] >      apparmor
	I1002 20:12:19.464298   32280 command_runner.go:130] >      selinux
	I1002 20:12:19.464302   32280 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:12:19.464306   32280 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:12:19.464310   32280 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:12:19.464385   32280 ssh_runner.go:195] Run: crio --version
	I1002 20:12:19.491564   32280 command_runner.go:130] > crio version 1.34.1
	I1002 20:12:19.491590   32280 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:12:19.491596   32280 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:12:19.491601   32280 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:12:19.491605   32280 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:12:19.491609   32280 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:12:19.491612   32280 command_runner.go:130] >    Compiler:       gc
	I1002 20:12:19.491619   32280 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:12:19.491623   32280 command_runner.go:130] >    Linkmode:       static
	I1002 20:12:19.491627   32280 command_runner.go:130] >    BuildTags:
	I1002 20:12:19.491630   32280 command_runner.go:130] >      static
	I1002 20:12:19.491634   32280 command_runner.go:130] >      netgo
	I1002 20:12:19.491637   32280 command_runner.go:130] >      osusergo
	I1002 20:12:19.491641   32280 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:12:19.491665   32280 command_runner.go:130] >      seccomp
	I1002 20:12:19.491671   32280 command_runner.go:130] >      apparmor
	I1002 20:12:19.491681   32280 command_runner.go:130] >      selinux
	I1002 20:12:19.491687   32280 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:12:19.491700   32280 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:12:19.491719   32280 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:12:19.493718   32280 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:12:19.495253   32280 cli_runner.go:164] Run: docker network inspect functional-753218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:12:19.512253   32280 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:12:19.516262   32280 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1002 20:12:19.516341   32280 kubeadm.go:883] updating cluster {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:12:19.516485   32280 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:12:19.516543   32280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:12:19.546693   32280 command_runner.go:130] > {
	I1002 20:12:19.546715   32280 command_runner.go:130] >   "images":  [
	I1002 20:12:19.546721   32280 command_runner.go:130] >     {
	I1002 20:12:19.546728   32280 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:12:19.546732   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.546739   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:12:19.546745   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546774   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.546794   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:12:19.546808   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:12:19.546815   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546819   32280 command_runner.go:130] >       "size":  "109379124",
	I1002 20:12:19.546826   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.546835   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.546843   32280 command_runner.go:130] >     },
	I1002 20:12:19.546850   32280 command_runner.go:130] >     {
	I1002 20:12:19.546862   32280 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:12:19.546873   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.546881   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:12:19.546890   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546896   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.546909   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:12:19.546920   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:12:19.546937   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546947   32280 command_runner.go:130] >       "size":  "31470524",
	I1002 20:12:19.546954   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.546966   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.546972   32280 command_runner.go:130] >     },
	I1002 20:12:19.546979   32280 command_runner.go:130] >     {
	I1002 20:12:19.546989   32280 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:12:19.547010   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547022   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:12:19.547032   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547039   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547053   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:12:19.547065   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:12:19.547073   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547080   32280 command_runner.go:130] >       "size":  "76103547",
	I1002 20:12:19.547087   32280 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:12:19.547091   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547094   32280 command_runner.go:130] >     },
	I1002 20:12:19.547100   32280 command_runner.go:130] >     {
	I1002 20:12:19.547113   32280 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:12:19.547119   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547129   32280 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:12:19.547135   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547144   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547154   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:12:19.547167   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:12:19.547176   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547182   32280 command_runner.go:130] >       "size":  "195976448",
	I1002 20:12:19.547187   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547192   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547201   32280 command_runner.go:130] >       },
	I1002 20:12:19.547217   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547228   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547233   32280 command_runner.go:130] >     },
	I1002 20:12:19.547242   32280 command_runner.go:130] >     {
	I1002 20:12:19.547252   32280 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:12:19.547261   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547269   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:12:19.547276   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547281   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547301   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:12:19.547316   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:12:19.547321   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547331   32280 command_runner.go:130] >       "size":  "89046001",
	I1002 20:12:19.547337   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547346   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547352   32280 command_runner.go:130] >       },
	I1002 20:12:19.547361   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547368   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547376   32280 command_runner.go:130] >     },
	I1002 20:12:19.547380   32280 command_runner.go:130] >     {
	I1002 20:12:19.547390   32280 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:12:19.547396   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547407   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:12:19.547413   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547423   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547435   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:12:19.547451   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:12:19.547459   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547466   32280 command_runner.go:130] >       "size":  "76004181",
	I1002 20:12:19.547474   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547480   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547489   32280 command_runner.go:130] >       },
	I1002 20:12:19.547495   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547507   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547512   32280 command_runner.go:130] >     },
	I1002 20:12:19.547517   32280 command_runner.go:130] >     {
	I1002 20:12:19.547527   32280 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:12:19.547534   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547541   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:12:19.547546   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547552   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547561   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:12:19.547582   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:12:19.547592   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547599   32280 command_runner.go:130] >       "size":  "73138073",
	I1002 20:12:19.547606   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547615   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547624   32280 command_runner.go:130] >     },
	I1002 20:12:19.547629   32280 command_runner.go:130] >     {
	I1002 20:12:19.547641   32280 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:12:19.547658   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547667   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:12:19.547673   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547683   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547693   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:12:19.547720   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:12:19.547729   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547733   32280 command_runner.go:130] >       "size":  "53844823",
	I1002 20:12:19.547737   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547743   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547752   32280 command_runner.go:130] >       },
	I1002 20:12:19.547758   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547768   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547775   32280 command_runner.go:130] >     },
	I1002 20:12:19.547782   32280 command_runner.go:130] >     {
	I1002 20:12:19.547794   32280 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:12:19.547804   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547814   32280 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:12:19.547820   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547825   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547839   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:12:19.547853   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:12:19.547861   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547867   32280 command_runner.go:130] >       "size":  "742092",
	I1002 20:12:19.547876   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547887   32280 command_runner.go:130] >         "value":  "65535"
	I1002 20:12:19.547894   32280 command_runner.go:130] >       },
	I1002 20:12:19.547900   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547906   32280 command_runner.go:130] >       "pinned":  true
	I1002 20:12:19.547910   32280 command_runner.go:130] >     }
	I1002 20:12:19.547917   32280 command_runner.go:130] >   ]
	I1002 20:12:19.547924   32280 command_runner.go:130] > }
	I1002 20:12:19.548472   32280 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:12:19.548485   32280 crio.go:433] Images already preloaded, skipping extraction
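
The preload decision compares the tags reported by crictl against the expected image set for Kubernetes v1.34.1; everything matched, so tarball extraction is skipped. A jq-based spot check of the same JSON (jq is an assumption here; minikube parses the output in Go):

    # List the runtime's repo tags and confirm the v1.34.1 control-plane set.
    sudo crictl images --output json | jq -r '.images[].repoTags[]' \
      | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy):v1.34.1'
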
	I1002 20:12:19.548524   32280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:12:19.570809   32280 command_runner.go:130] > {
	I1002 20:12:19.570828   32280 command_runner.go:130] >   "images":  [
	I1002 20:12:19.570831   32280 command_runner.go:130] >     {
	I1002 20:12:19.570839   32280 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:12:19.570844   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.570849   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:12:19.570853   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570857   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.570864   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:12:19.570871   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:12:19.570877   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570882   32280 command_runner.go:130] >       "size":  "109379124",
	I1002 20:12:19.570889   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.570902   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.570908   32280 command_runner.go:130] >     },
	I1002 20:12:19.570914   32280 command_runner.go:130] >     {
	I1002 20:12:19.570922   32280 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:12:19.570928   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.570932   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:12:19.570938   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570941   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.570948   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:12:19.570958   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:12:19.570964   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570971   32280 command_runner.go:130] >       "size":  "31470524",
	I1002 20:12:19.570976   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.570985   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.570990   32280 command_runner.go:130] >     },
	I1002 20:12:19.570993   32280 command_runner.go:130] >     {
	I1002 20:12:19.571001   32280 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:12:19.571005   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571012   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:12:19.571016   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571021   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571028   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:12:19.571037   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:12:19.571043   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571047   32280 command_runner.go:130] >       "size":  "76103547",
	I1002 20:12:19.571050   32280 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:12:19.571056   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571059   32280 command_runner.go:130] >     },
	I1002 20:12:19.571065   32280 command_runner.go:130] >     {
	I1002 20:12:19.571071   32280 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:12:19.571077   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571081   32280 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:12:19.571087   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571091   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571099   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:12:19.571108   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:12:19.571113   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571117   32280 command_runner.go:130] >       "size":  "195976448",
	I1002 20:12:19.571122   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571126   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571132   32280 command_runner.go:130] >       },
	I1002 20:12:19.571139   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571145   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571152   32280 command_runner.go:130] >     },
	I1002 20:12:19.571157   32280 command_runner.go:130] >     {
	I1002 20:12:19.571163   32280 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:12:19.571169   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571173   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:12:19.571179   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571183   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571192   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:12:19.571201   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:12:19.571207   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571211   32280 command_runner.go:130] >       "size":  "89046001",
	I1002 20:12:19.571216   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571220   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571226   32280 command_runner.go:130] >       },
	I1002 20:12:19.571231   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571234   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571237   32280 command_runner.go:130] >     },
	I1002 20:12:19.571242   32280 command_runner.go:130] >     {
	I1002 20:12:19.571249   32280 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:12:19.571255   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571260   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:12:19.571265   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571269   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571276   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:12:19.571286   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:12:19.571292   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571295   32280 command_runner.go:130] >       "size":  "76004181",
	I1002 20:12:19.571301   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571305   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571310   32280 command_runner.go:130] >       },
	I1002 20:12:19.571314   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571318   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571323   32280 command_runner.go:130] >     },
	I1002 20:12:19.571327   32280 command_runner.go:130] >     {
	I1002 20:12:19.571335   32280 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:12:19.571339   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571349   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:12:19.571355   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571359   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571367   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:12:19.571376   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:12:19.571382   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571386   32280 command_runner.go:130] >       "size":  "73138073",
	I1002 20:12:19.571393   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571397   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571402   32280 command_runner.go:130] >     },
	I1002 20:12:19.571405   32280 command_runner.go:130] >     {
	I1002 20:12:19.571410   32280 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:12:19.571414   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571418   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:12:19.571422   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571425   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571431   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:12:19.571446   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:12:19.571455   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571461   32280 command_runner.go:130] >       "size":  "53844823",
	I1002 20:12:19.571469   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571474   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571482   32280 command_runner.go:130] >       },
	I1002 20:12:19.571488   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571495   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571498   32280 command_runner.go:130] >     },
	I1002 20:12:19.571504   32280 command_runner.go:130] >     {
	I1002 20:12:19.571510   32280 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:12:19.571516   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571520   32280 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:12:19.571526   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571530   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571542   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:12:19.571552   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:12:19.571556   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571562   32280 command_runner.go:130] >       "size":  "742092",
	I1002 20:12:19.571565   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571571   32280 command_runner.go:130] >         "value":  "65535"
	I1002 20:12:19.571575   32280 command_runner.go:130] >       },
	I1002 20:12:19.571581   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571585   32280 command_runner.go:130] >       "pinned":  true
	I1002 20:12:19.571590   32280 command_runner.go:130] >     }
	I1002 20:12:19.571593   32280 command_runner.go:130] >   ]
	I1002 20:12:19.571598   32280 command_runner.go:130] > }
	I1002 20:12:19.572597   32280 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:12:19.572614   32280 cache_images.go:85] Images are preloaded, skipping loading
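The conclusion at cache_images.go:85 ("Images are preloaded, skipping loading") is reached once every required image appears in the listing. A hedged sketch of such a membership check, reusing the hypothetical crictlImage type from the sketch above:

    // Hypothetical sketch of the preload decision: loading is skipped only if
    // every required tag is already present in the crictl listing.
    func imagesPreloaded(listed []crictlImage, required []string) bool {
    	have := make(map[string]bool)
    	for _, img := range listed {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	for _, tag := range required {
    		if !have[tag] {
    			return false
    		}
    	}
    	return true
    }

For the listing above, a required set such as "registry.k8s.io/kube-apiserver:v1.34.1", "registry.k8s.io/etcd:3.6.4-0", and "registry.k8s.io/pause:3.10.1" (copied from the repoTags logged here) would return true.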
	I1002 20:12:19.572621   32280 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:12:19.572734   32280 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
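The kubelet unit printed at kubeadm.go:946 is rendered from the cluster config on the line above (Kubernetes version, node name, node IP). As an illustration only, and not the template minikube actually ships, a trimmed text/template sketch producing a drop-in of this shape:

    // Hypothetical, trimmed sketch of rendering a kubelet systemd drop-in like
    // the one logged above. Template text and struct are illustrative only;
    // the real template carries many more kubelet flags.
    package main

    import (
    	"os"
    	"text/template"
    )

    const kubeletUnit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
    	data := struct {
    		KubernetesVersion, NodeName, NodeIP string
    	}{"v1.34.1", "functional-753218", "192.168.49.2"}
    	if err := tmpl.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }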
	I1002 20:12:19.572796   32280 ssh_runner.go:195] Run: crio config
	I1002 20:12:19.612615   32280 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 20:12:19.612638   32280 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 20:12:19.612664   32280 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 20:12:19.612669   32280 command_runner.go:130] > #
	I1002 20:12:19.612689   32280 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 20:12:19.612698   32280 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 20:12:19.612709   32280 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 20:12:19.612721   32280 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 20:12:19.612728   32280 command_runner.go:130] > # reload'.
	I1002 20:12:19.612738   32280 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 20:12:19.612748   32280 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 20:12:19.612758   32280 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 20:12:19.612768   32280 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 20:12:19.612773   32280 command_runner.go:130] > [crio]
	I1002 20:12:19.612785   32280 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 20:12:19.612796   32280 command_runner.go:130] > # containers images, in this directory.
	I1002 20:12:19.612808   32280 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 20:12:19.612821   32280 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 20:12:19.612828   32280 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1002 20:12:19.612841   32280 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores its images in this directory, separately from Root.
	I1002 20:12:19.612855   32280 command_runner.go:130] > # imagestore = ""
	I1002 20:12:19.612864   32280 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 20:12:19.612878   32280 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 20:12:19.612885   32280 command_runner.go:130] > # storage_driver = "overlay"
	I1002 20:12:19.612895   32280 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 20:12:19.612905   32280 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 20:12:19.612914   32280 command_runner.go:130] > # storage_option = [
	I1002 20:12:19.612917   32280 command_runner.go:130] > # ]
	I1002 20:12:19.612923   32280 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 20:12:19.612931   32280 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 20:12:19.612941   32280 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 20:12:19.612950   32280 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 20:12:19.612959   32280 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 20:12:19.612970   32280 command_runner.go:130] > # always happen on a node reboot
	I1002 20:12:19.612977   32280 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 20:12:19.612994   32280 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 20:12:19.613004   32280 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 20:12:19.613009   32280 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 20:12:19.613016   32280 command_runner.go:130] > # version_file_persist = ""
	I1002 20:12:19.613025   32280 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 20:12:19.613033   32280 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 20:12:19.613041   32280 command_runner.go:130] > # internal_wipe = true
	I1002 20:12:19.613054   32280 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1002 20:12:19.613066   32280 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1002 20:12:19.613075   32280 command_runner.go:130] > # internal_repair = true
	I1002 20:12:19.613083   32280 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 20:12:19.613095   32280 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 20:12:19.613113   32280 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 20:12:19.613120   32280 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 20:12:19.613129   32280 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 20:12:19.613134   32280 command_runner.go:130] > [crio.api]
	I1002 20:12:19.613142   32280 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 20:12:19.613150   32280 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 20:12:19.613162   32280 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 20:12:19.613173   32280 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 20:12:19.613185   32280 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 20:12:19.613197   32280 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 20:12:19.613204   32280 command_runner.go:130] > # stream_port = "0"
	I1002 20:12:19.613213   32280 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 20:12:19.613222   32280 command_runner.go:130] > # stream_enable_tls = false
	I1002 20:12:19.613231   32280 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 20:12:19.613238   32280 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 20:12:19.613248   32280 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 20:12:19.613260   32280 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1002 20:12:19.613266   32280 command_runner.go:130] > # stream_tls_cert = ""
	I1002 20:12:19.613274   32280 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 20:12:19.613292   32280 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1002 20:12:19.613301   32280 command_runner.go:130] > # stream_tls_key = ""
	I1002 20:12:19.613309   32280 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 20:12:19.613323   32280 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 20:12:19.613331   32280 command_runner.go:130] > # automatically pick up the changes.
	I1002 20:12:19.613340   32280 command_runner.go:130] > # stream_tls_ca = ""
	I1002 20:12:19.613394   32280 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:12:19.613408   32280 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 20:12:19.613420   32280 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:12:19.613430   32280 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1002 20:12:19.613440   32280 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 20:12:19.613452   32280 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 20:12:19.613458   32280 command_runner.go:130] > [crio.runtime]
	I1002 20:12:19.613469   32280 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 20:12:19.613481   32280 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 20:12:19.613487   32280 command_runner.go:130] > # "nofile=1024:2048"
	I1002 20:12:19.613500   32280 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 20:12:19.613508   32280 command_runner.go:130] > # default_ulimits = [
	I1002 20:12:19.613514   32280 command_runner.go:130] > # ]
	I1002 20:12:19.613526   32280 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 20:12:19.613532   32280 command_runner.go:130] > # no_pivot = false
	I1002 20:12:19.613543   32280 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 20:12:19.613554   32280 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 20:12:19.613564   32280 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 20:12:19.613573   32280 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 20:12:19.613584   32280 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 20:12:19.613594   32280 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:12:19.613603   32280 command_runner.go:130] > # conmon = ""
	I1002 20:12:19.613611   32280 command_runner.go:130] > # Cgroup setting for conmon
	I1002 20:12:19.613625   32280 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 20:12:19.613632   32280 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 20:12:19.613642   32280 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 20:12:19.613664   32280 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 20:12:19.613682   32280 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:12:19.613692   32280 command_runner.go:130] > # conmon_env = [
	I1002 20:12:19.613698   32280 command_runner.go:130] > # ]
	I1002 20:12:19.613710   32280 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 20:12:19.613720   32280 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 20:12:19.613729   32280 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 20:12:19.613739   32280 command_runner.go:130] > # default_env = [
	I1002 20:12:19.613746   32280 command_runner.go:130] > # ]
	I1002 20:12:19.613758   32280 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 20:12:19.613769   32280 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1002 20:12:19.613778   32280 command_runner.go:130] > # selinux = false
	I1002 20:12:19.613788   32280 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 20:12:19.613803   32280 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1002 20:12:19.613814   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.613822   32280 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:12:19.613835   32280 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1002 20:12:19.613846   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.613852   32280 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1002 20:12:19.613865   32280 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 20:12:19.613878   32280 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 20:12:19.613890   32280 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 20:12:19.613899   32280 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 20:12:19.613908   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.613917   32280 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 20:12:19.613926   32280 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 20:12:19.613937   32280 command_runner.go:130] > # the cgroup blockio controller.
	I1002 20:12:19.613944   32280 command_runner.go:130] > # blockio_config_file = ""
	I1002 20:12:19.613958   32280 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1002 20:12:19.613965   32280 command_runner.go:130] > # blockio parameters.
	I1002 20:12:19.613974   32280 command_runner.go:130] > # blockio_reload = false
	I1002 20:12:19.613983   32280 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 20:12:19.613994   32280 command_runner.go:130] > # irqbalance daemon.
	I1002 20:12:19.614002   32280 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 20:12:19.614013   32280 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1002 20:12:19.614023   32280 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1002 20:12:19.614037   32280 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1002 20:12:19.614048   32280 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1002 20:12:19.614061   32280 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 20:12:19.614068   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.614077   32280 command_runner.go:130] > # rdt_config_file = ""
	I1002 20:12:19.614085   32280 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 20:12:19.614095   32280 command_runner.go:130] > # cgroup_manager = "systemd"
	I1002 20:12:19.614104   32280 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 20:12:19.614113   32280 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 20:12:19.614127   32280 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 20:12:19.614139   32280 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 20:12:19.614147   32280 command_runner.go:130] > # will be added.
	I1002 20:12:19.614155   32280 command_runner.go:130] > # default_capabilities = [
	I1002 20:12:19.614163   32280 command_runner.go:130] > # 	"CHOWN",
	I1002 20:12:19.614170   32280 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 20:12:19.614177   32280 command_runner.go:130] > # 	"FSETID",
	I1002 20:12:19.614182   32280 command_runner.go:130] > # 	"FOWNER",
	I1002 20:12:19.614187   32280 command_runner.go:130] > # 	"SETGID",
	I1002 20:12:19.614210   32280 command_runner.go:130] > # 	"SETUID",
	I1002 20:12:19.614214   32280 command_runner.go:130] > # 	"SETPCAP",
	I1002 20:12:19.614219   32280 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 20:12:19.614223   32280 command_runner.go:130] > # 	"KILL",
	I1002 20:12:19.614227   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614236   32280 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 20:12:19.614243   32280 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 20:12:19.614248   32280 command_runner.go:130] > # add_inheritable_capabilities = false
	I1002 20:12:19.614256   32280 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 20:12:19.614265   32280 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:12:19.614271   32280 command_runner.go:130] > default_sysctls = [
	I1002 20:12:19.614279   32280 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1002 20:12:19.614284   32280 command_runner.go:130] > ]
	I1002 20:12:19.614291   32280 command_runner.go:130] > # List of devices on the host that a
	I1002 20:12:19.614299   32280 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 20:12:19.614308   32280 command_runner.go:130] > # allowed_devices = [
	I1002 20:12:19.614313   32280 command_runner.go:130] > # 	"/dev/fuse",
	I1002 20:12:19.614321   32280 command_runner.go:130] > # 	"/dev/net/tun",
	I1002 20:12:19.614327   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614335   32280 command_runner.go:130] > # List of additional devices, specified as
	I1002 20:12:19.614349   32280 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 20:12:19.614359   32280 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 20:12:19.614368   32280 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:12:19.614376   32280 command_runner.go:130] > # additional_devices = [
	I1002 20:12:19.614381   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614388   32280 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 20:12:19.614394   32280 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 20:12:19.614398   32280 command_runner.go:130] > # 	"/etc/cdi",
	I1002 20:12:19.614402   32280 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 20:12:19.614404   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614410   32280 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 20:12:19.614416   32280 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 20:12:19.614420   32280 command_runner.go:130] > # Defaults to false.
	I1002 20:12:19.614424   32280 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 20:12:19.614432   32280 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 20:12:19.614438   32280 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 20:12:19.614441   32280 command_runner.go:130] > # hooks_dir = [
	I1002 20:12:19.614445   32280 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 20:12:19.614449   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614454   32280 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 20:12:19.614462   32280 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 20:12:19.614467   32280 command_runner.go:130] > # its default mounts from the following two files:
	I1002 20:12:19.614471   32280 command_runner.go:130] > #
	I1002 20:12:19.614476   32280 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 20:12:19.614484   32280 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 20:12:19.614489   32280 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 20:12:19.614494   32280 command_runner.go:130] > #
	I1002 20:12:19.614500   32280 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 20:12:19.614506   32280 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 20:12:19.614514   32280 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 20:12:19.614519   32280 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 20:12:19.614524   32280 command_runner.go:130] > #
	I1002 20:12:19.614528   32280 command_runner.go:130] > # default_mounts_file = ""
	I1002 20:12:19.614532   32280 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 20:12:19.614539   32280 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 20:12:19.614545   32280 command_runner.go:130] > # pids_limit = -1
	I1002 20:12:19.614551   32280 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1002 20:12:19.614559   32280 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 20:12:19.614564   32280 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 20:12:19.614572   32280 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 20:12:19.614578   32280 command_runner.go:130] > # log_size_max = -1
	I1002 20:12:19.614716   32280 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 20:12:19.614727   32280 command_runner.go:130] > # log_to_journald = false
	I1002 20:12:19.614733   32280 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 20:12:19.614738   32280 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 20:12:19.614745   32280 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 20:12:19.614750   32280 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 20:12:19.614757   32280 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 20:12:19.614761   32280 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 20:12:19.614766   32280 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 20:12:19.614772   32280 command_runner.go:130] > # read_only = false
	I1002 20:12:19.614777   32280 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 20:12:19.614785   32280 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 20:12:19.614789   32280 command_runner.go:130] > # live configuration reload.
	I1002 20:12:19.614795   32280 command_runner.go:130] > # log_level = "info"
	I1002 20:12:19.614800   32280 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 20:12:19.614807   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.614811   32280 command_runner.go:130] > # log_filter = ""
	I1002 20:12:19.614817   32280 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 20:12:19.614825   32280 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 20:12:19.614829   32280 command_runner.go:130] > # separated by comma.
	I1002 20:12:19.614839   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614846   32280 command_runner.go:130] > # uid_mappings = ""
	I1002 20:12:19.614851   32280 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 20:12:19.614859   32280 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 20:12:19.614863   32280 command_runner.go:130] > # separated by comma.
	I1002 20:12:19.614873   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614877   32280 command_runner.go:130] > # gid_mappings = ""
	I1002 20:12:19.614884   32280 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 20:12:19.614890   32280 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:12:19.614898   32280 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:12:19.614905   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614909   32280 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 20:12:19.614916   32280 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 20:12:19.614924   32280 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:12:19.614931   32280 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:12:19.614940   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614944   32280 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 20:12:19.614949   32280 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 20:12:19.614959   32280 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 20:12:19.614964   32280 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 20:12:19.614970   32280 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 20:12:19.614975   32280 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 20:12:19.614983   32280 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 20:12:19.614988   32280 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 20:12:19.614993   32280 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 20:12:19.614999   32280 command_runner.go:130] > # drop_infra_ctr = true
	I1002 20:12:19.615004   32280 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 20:12:19.615009   32280 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 20:12:19.615018   32280 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 20:12:19.615024   32280 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 20:12:19.615031   32280 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1002 20:12:19.615038   32280 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1002 20:12:19.615044   32280 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1002 20:12:19.615052   32280 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1002 20:12:19.615055   32280 command_runner.go:130] > # shared_cpuset = ""
	I1002 20:12:19.615063   32280 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 20:12:19.615068   32280 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 20:12:19.615073   32280 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 20:12:19.615080   32280 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 20:12:19.615086   32280 command_runner.go:130] > # pinns_path = ""
	I1002 20:12:19.615090   32280 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1002 20:12:19.615098   32280 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1002 20:12:19.615102   32280 command_runner.go:130] > # enable_criu_support = true
	I1002 20:12:19.615111   32280 command_runner.go:130] > # Enable/disable the generation of the container,
	I1002 20:12:19.615116   32280 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1002 20:12:19.615123   32280 command_runner.go:130] > # enable_pod_events = false
	I1002 20:12:19.615128   32280 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 20:12:19.615135   32280 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1002 20:12:19.615139   32280 command_runner.go:130] > # default_runtime = "crun"
	I1002 20:12:19.615146   32280 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 20:12:19.615152   32280 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1002 20:12:19.615161   32280 command_runner.go:130] > # This option protects against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 20:12:19.615168   32280 command_runner.go:130] > # creation as a file is not desired either.
	I1002 20:12:19.615175   32280 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 20:12:19.615182   32280 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 20:12:19.615187   32280 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 20:12:19.615190   32280 command_runner.go:130] > # ]
	I1002 20:12:19.615195   32280 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 20:12:19.615201   32280 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 20:12:19.615207   32280 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1002 20:12:19.615214   32280 command_runner.go:130] > # Each entry in the table should follow the format:
	I1002 20:12:19.615216   32280 command_runner.go:130] > #
	I1002 20:12:19.615221   32280 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1002 20:12:19.615227   32280 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1002 20:12:19.615231   32280 command_runner.go:130] > # runtime_type = "oci"
	I1002 20:12:19.615237   32280 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1002 20:12:19.615241   32280 command_runner.go:130] > # inherit_default_runtime = false
	I1002 20:12:19.615246   32280 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1002 20:12:19.615252   32280 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1002 20:12:19.615256   32280 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1002 20:12:19.615262   32280 command_runner.go:130] > # monitor_env = []
	I1002 20:12:19.615266   32280 command_runner.go:130] > # privileged_without_host_devices = false
	I1002 20:12:19.615270   32280 command_runner.go:130] > # allowed_annotations = []
	I1002 20:12:19.615278   32280 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1002 20:12:19.615282   32280 command_runner.go:130] > # no_sync_log = false
	I1002 20:12:19.615288   32280 command_runner.go:130] > # default_annotations = {}
	I1002 20:12:19.615293   32280 command_runner.go:130] > # stream_websockets = false
	I1002 20:12:19.615299   32280 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:12:19.615333   32280 command_runner.go:130] > # Where:
	I1002 20:12:19.615343   32280 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1002 20:12:19.615349   32280 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1002 20:12:19.615354   32280 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 20:12:19.615363   32280 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 20:12:19.615366   32280 command_runner.go:130] > #   in $PATH.
	I1002 20:12:19.615375   32280 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1002 20:12:19.615380   32280 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 20:12:19.615387   32280 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1002 20:12:19.615391   32280 command_runner.go:130] > #   state.
	I1002 20:12:19.615400   32280 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 20:12:19.615413   32280 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 20:12:19.615421   32280 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1002 20:12:19.615428   32280 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1002 20:12:19.615435   32280 command_runner.go:130] > #   the values from the default runtime on load time.
	I1002 20:12:19.615441   32280 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 20:12:19.615446   32280 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 20:12:19.615452   32280 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 20:12:19.615458   32280 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 20:12:19.615465   32280 command_runner.go:130] > #   The currently recognized values are:
	I1002 20:12:19.615470   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 20:12:19.615479   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 20:12:19.615485   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 20:12:19.615490   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 20:12:19.615499   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 20:12:19.615505   32280 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 20:12:19.615514   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1002 20:12:19.615521   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1002 20:12:19.615529   32280 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 20:12:19.615534   32280 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1002 20:12:19.615541   32280 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1002 20:12:19.615549   32280 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1002 20:12:19.615555   32280 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1002 20:12:19.615564   32280 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1002 20:12:19.615569   32280 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1002 20:12:19.615579   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1002 20:12:19.615586   32280 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1002 20:12:19.615589   32280 command_runner.go:130] > #   deprecated option "conmon".
	I1002 20:12:19.615596   32280 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1002 20:12:19.615601   32280 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1002 20:12:19.615607   32280 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1002 20:12:19.615614   32280 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 20:12:19.615621   32280 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1002 20:12:19.615628   32280 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1002 20:12:19.615634   32280 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1002 20:12:19.615638   32280 command_runner.go:130] > #   conmon-rs by using:
	I1002 20:12:19.615656   32280 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1002 20:12:19.615668   32280 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1002 20:12:19.615682   32280 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1002 20:12:19.615690   32280 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1002 20:12:19.615695   32280 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1002 20:12:19.615704   32280 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1002 20:12:19.615712   32280 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1002 20:12:19.615720   32280 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1002 20:12:19.615731   32280 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1002 20:12:19.615747   32280 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1002 20:12:19.615756   32280 command_runner.go:130] > #   when a machine crash happens.
	I1002 20:12:19.615765   32280 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1002 20:12:19.615774   32280 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1002 20:12:19.615784   32280 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1002 20:12:19.615788   32280 command_runner.go:130] > #   seccomp profile for the runtime.
	I1002 20:12:19.615797   32280 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1002 20:12:19.615804   32280 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1002 20:12:19.615810   32280 command_runner.go:130] > #
	I1002 20:12:19.615818   32280 command_runner.go:130] > # Using the seccomp notifier feature:
	I1002 20:12:19.615826   32280 command_runner.go:130] > #
	I1002 20:12:19.615838   32280 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1002 20:12:19.615850   32280 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1002 20:12:19.615854   32280 command_runner.go:130] > #
	I1002 20:12:19.615860   32280 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1002 20:12:19.615868   32280 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1002 20:12:19.615871   32280 command_runner.go:130] > #
	I1002 20:12:19.615880   32280 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1002 20:12:19.615889   32280 command_runner.go:130] > # feature.
	I1002 20:12:19.615894   32280 command_runner.go:130] > #
	I1002 20:12:19.615906   32280 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1002 20:12:19.615918   32280 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1002 20:12:19.615931   32280 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1002 20:12:19.615943   32280 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1002 20:12:19.615954   32280 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1002 20:12:19.615957   32280 command_runner.go:130] > #
	I1002 20:12:19.615964   32280 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1002 20:12:19.615972   32280 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1002 20:12:19.615977   32280 command_runner.go:130] > #
	I1002 20:12:19.615989   32280 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1002 20:12:19.616001   32280 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1002 20:12:19.616010   32280 command_runner.go:130] > #
	I1002 20:12:19.616019   32280 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1002 20:12:19.616031   32280 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1002 20:12:19.616039   32280 command_runner.go:130] > # limitation.
	I1002 20:12:19.616045   32280 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1002 20:12:19.616054   32280 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1002 20:12:19.616058   32280 command_runner.go:130] > runtime_type = ""
	I1002 20:12:19.616063   32280 command_runner.go:130] > runtime_root = "/run/crun"
	I1002 20:12:19.616073   32280 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:12:19.616082   32280 command_runner.go:130] > runtime_config_path = ""
	I1002 20:12:19.616091   32280 command_runner.go:130] > container_min_memory = ""
	I1002 20:12:19.616098   32280 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:12:19.616107   32280 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:12:19.616115   32280 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:12:19.616124   32280 command_runner.go:130] > allowed_annotations = [
	I1002 20:12:19.616131   32280 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1002 20:12:19.616137   32280 command_runner.go:130] > ]
	I1002 20:12:19.616141   32280 command_runner.go:130] > privileged_without_host_devices = false
	I1002 20:12:19.616146   32280 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 20:12:19.616157   32280 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1002 20:12:19.616163   32280 command_runner.go:130] > runtime_type = ""
	I1002 20:12:19.616173   32280 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 20:12:19.616180   32280 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:12:19.616189   32280 command_runner.go:130] > runtime_config_path = ""
	I1002 20:12:19.616196   32280 command_runner.go:130] > container_min_memory = ""
	I1002 20:12:19.616206   32280 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:12:19.616215   32280 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:12:19.616221   32280 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:12:19.616228   32280 command_runner.go:130] > privileged_without_host_devices = false
	I1002 20:12:19.616238   32280 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 20:12:19.616247   32280 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 20:12:19.616258   32280 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 20:12:19.616272   32280 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1002 20:12:19.616289   32280 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1002 20:12:19.616305   32280 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1002 20:12:19.616314   32280 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1002 20:12:19.616323   32280 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 20:12:19.616340   32280 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 20:12:19.616353   32280 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 20:12:19.616366   32280 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 20:12:19.616380   32280 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 20:12:19.616387   32280 command_runner.go:130] > # Example:
	I1002 20:12:19.616393   32280 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 20:12:19.616401   32280 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 20:12:19.616408   32280 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 20:12:19.616420   32280 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 20:12:19.616430   32280 command_runner.go:130] > # cpuset = "0-1"
	I1002 20:12:19.616435   32280 command_runner.go:130] > # cpushares = "5"
	I1002 20:12:19.616442   32280 command_runner.go:130] > # cpuquota = "1000"
	I1002 20:12:19.616451   32280 command_runner.go:130] > # cpuperiod = "100000"
	I1002 20:12:19.616457   32280 command_runner.go:130] > # cpulimit = "35"
	I1002 20:12:19.616466   32280 command_runner.go:130] > # Where:
	I1002 20:12:19.616473   32280 command_runner.go:130] > # The workload name is workload-type.
	I1002 20:12:19.616483   32280 command_runner.go:130] > # To opt into this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 20:12:19.616489   32280 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 20:12:19.616502   32280 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 20:12:19.616516   32280 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 20:12:19.616528   32280 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
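Putting the example above together, a hypothetical pod that opts into the "workload-type" workload and overrides cpushares for one container (all names and values are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo                                  # hypothetical name
      annotations:
        io.crio/workload: ""                               # activation annotation; key only, value ignored
        io.crio.workload-type/app: '{"cpushares": "512"}'  # per-container override for container "app"
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.10.1                # placeholder image

Containers without a matching override annotation would receive the defaults from [crio.runtime.workloads.workload-type.resources].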
	I1002 20:12:19.616541   32280 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1002 20:12:19.616551   32280 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1002 20:12:19.616560   32280 command_runner.go:130] > # Default value is set to true
	I1002 20:12:19.616566   32280 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1002 20:12:19.616574   32280 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1002 20:12:19.616582   32280 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1002 20:12:19.616592   32280 command_runner.go:130] > # Default value is set to 'false'
	I1002 20:12:19.616601   32280 command_runner.go:130] > # disable_hostport_mapping = false
	I1002 20:12:19.616612   32280 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1002 20:12:19.616624   32280 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1002 20:12:19.616632   32280 command_runner.go:130] > # timezone = ""
	I1002 20:12:19.616642   32280 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 20:12:19.616658   32280 command_runner.go:130] > #
	I1002 20:12:19.616667   32280 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 20:12:19.616686   32280 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1002 20:12:19.616695   32280 command_runner.go:130] > [crio.image]
	I1002 20:12:19.616703   32280 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 20:12:19.616714   32280 command_runner.go:130] > # default_transport = "docker://"
	I1002 20:12:19.616725   32280 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 20:12:19.616732   32280 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:12:19.616739   32280 command_runner.go:130] > # global_auth_file = ""
	I1002 20:12:19.616751   32280 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 20:12:19.616762   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.616771   32280 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1002 20:12:19.616783   32280 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 20:12:19.616795   32280 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:12:19.616804   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.616811   32280 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 20:12:19.616817   32280 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 20:12:19.616825   32280 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1002 20:12:19.616830   32280 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1002 20:12:19.616837   32280 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 20:12:19.616842   32280 command_runner.go:130] > # pause_command = "/pause"
	I1002 20:12:19.616852   32280 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1002 20:12:19.616864   32280 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1002 20:12:19.616877   32280 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1002 20:12:19.616889   32280 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1002 20:12:19.616899   32280 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1002 20:12:19.616911   32280 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1002 20:12:19.616918   32280 command_runner.go:130] > # pinned_images = [
	I1002 20:12:19.616921   32280 command_runner.go:130] > # ]
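As a sketch of the three pattern kinds described above (entries are assumptions, not from this run):

    pinned_images = [
        "registry.k8s.io/pause:3.10.1",  # exact: must match the entire name
        "registry.k8s.io/kube-*",        # glob: wildcard at the end only
        "*coredns*",                     # keyword: wildcards on both ends
    ]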
	I1002 20:12:19.616928   32280 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 20:12:19.616937   32280 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 20:12:19.616942   32280 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 20:12:19.616947   32280 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 20:12:19.616955   32280 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 20:12:19.616959   32280 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1002 20:12:19.616965   32280 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1002 20:12:19.616973   32280 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1002 20:12:19.616979   32280 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1002 20:12:19.616988   32280 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1002 20:12:19.616997   32280 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1002 20:12:19.617009   32280 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
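For illustration, the smallest useful policy in the containers-policy.json(5) format referenced above is one that accepts any image; as the comments note, deferring to the system-wide default at /etc/containers/policy.json is usually preferable to setting one here:

    {
        "default": [
            { "type": "insecureAcceptAnything" }
        ]
    }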
	I1002 20:12:19.617020   32280 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 20:12:19.617036   32280 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 20:12:19.617044   32280 command_runner.go:130] > # changing them here.
	I1002 20:12:19.617053   32280 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1002 20:12:19.617062   32280 command_runner.go:130] > # insecure_registries = [
	I1002 20:12:19.617066   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617073   32280 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 20:12:19.617078   32280 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 20:12:19.617084   32280 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 20:12:19.617089   32280 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 20:12:19.617095   32280 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 20:12:19.617101   32280 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1002 20:12:19.617107   32280 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1002 20:12:19.617111   32280 command_runner.go:130] > # auto_reload_registries = false
	I1002 20:12:19.617117   32280 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1002 20:12:19.617127   32280 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1002 20:12:19.617135   32280 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1002 20:12:19.617138   32280 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1002 20:12:19.617143   32280 command_runner.go:130] > # The mode of short name resolution.
	I1002 20:12:19.617149   32280 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1002 20:12:19.617158   32280 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used but the results are ambiguous.
	I1002 20:12:19.617163   32280 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1002 20:12:19.617169   32280 command_runner.go:130] > # short_name_mode = "enforcing"
	I1002 20:12:19.617175   32280 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1002 20:12:19.617182   32280 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1002 20:12:19.617186   32280 command_runner.go:130] > # oci_artifact_mount_support = true
	I1002 20:12:19.617192   32280 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 20:12:19.617197   32280 command_runner.go:130] > # CNI plugins.
	I1002 20:12:19.617200   32280 command_runner.go:130] > [crio.network]
	I1002 20:12:19.617206   32280 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 20:12:19.617212   32280 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1002 20:12:19.617219   32280 command_runner.go:130] > # cni_default_network = ""
	I1002 20:12:19.617231   32280 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 20:12:19.617240   32280 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 20:12:19.617246   32280 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 20:12:19.617250   32280 command_runner.go:130] > # plugin_dirs = [
	I1002 20:12:19.617254   32280 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 20:12:19.617256   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617261   32280 command_runner.go:130] > # List of included pod metrics.
	I1002 20:12:19.617266   32280 command_runner.go:130] > # included_pod_metrics = [
	I1002 20:12:19.617269   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617274   32280 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1002 20:12:19.617279   32280 command_runner.go:130] > [crio.metrics]
	I1002 20:12:19.617284   32280 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 20:12:19.617290   32280 command_runner.go:130] > # enable_metrics = false
	I1002 20:12:19.617294   32280 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 20:12:19.617298   32280 command_runner.go:130] > # By default, all metrics are enabled.
	I1002 20:12:19.617306   32280 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 20:12:19.617312   32280 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 20:12:19.617320   32280 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 20:12:19.617323   32280 command_runner.go:130] > # metrics_collectors = [
	I1002 20:12:19.617327   32280 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 20:12:19.617331   32280 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1002 20:12:19.617334   32280 command_runner.go:130] > # 	"containers_oom_total",
	I1002 20:12:19.617338   32280 command_runner.go:130] > # 	"processes_defunct",
	I1002 20:12:19.617341   32280 command_runner.go:130] > # 	"operations_total",
	I1002 20:12:19.617345   32280 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 20:12:19.617348   32280 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 20:12:19.617352   32280 command_runner.go:130] > # 	"operations_errors_total",
	I1002 20:12:19.617355   32280 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 20:12:19.617359   32280 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 20:12:19.617363   32280 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 20:12:19.617367   32280 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 20:12:19.617371   32280 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 20:12:19.617375   32280 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 20:12:19.617379   32280 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1002 20:12:19.617383   32280 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1002 20:12:19.617388   32280 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1002 20:12:19.617391   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617397   32280 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1002 20:12:19.617403   32280 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1002 20:12:19.617407   32280 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 20:12:19.617411   32280 command_runner.go:130] > # metrics_port = 9090
	I1002 20:12:19.617415   32280 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 20:12:19.617419   32280 command_runner.go:130] > # metrics_socket = ""
	I1002 20:12:19.617423   32280 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 20:12:19.617429   32280 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 20:12:19.617437   32280 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 20:12:19.617441   32280 command_runner.go:130] > # certificate on any modification event.
	I1002 20:12:19.617447   32280 command_runner.go:130] > # metrics_cert = ""
	I1002 20:12:19.617452   32280 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 20:12:19.617456   32280 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 20:12:19.617460   32280 command_runner.go:130] > # metrics_key = ""
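A hypothetical crio.conf drop-in that would enable the metrics endpoint with a subset of the collectors listed above (all values shown are assumptions; this run leaves metrics at their defaults):

    [crio.metrics]
    enable_metrics = true
    metrics_host = "127.0.0.1"    # default listen address
    metrics_port = 9090           # default port
    metrics_collectors = [
        "operations_total",
        "image_pulls_success_total",
    ]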
	I1002 20:12:19.617465   32280 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 20:12:19.617471   32280 command_runner.go:130] > [crio.tracing]
	I1002 20:12:19.617476   32280 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 20:12:19.617482   32280 command_runner.go:130] > # enable_tracing = false
	I1002 20:12:19.617488   32280 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1002 20:12:19.617494   32280 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1002 20:12:19.617500   32280 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1002 20:12:19.617506   32280 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1002 20:12:19.617511   32280 command_runner.go:130] > # CRI-O NRI configuration.
	I1002 20:12:19.617514   32280 command_runner.go:130] > [crio.nri]
	I1002 20:12:19.617518   32280 command_runner.go:130] > # Globally enable or disable NRI.
	I1002 20:12:19.617524   32280 command_runner.go:130] > # enable_nri = true
	I1002 20:12:19.617527   32280 command_runner.go:130] > # NRI socket to listen on.
	I1002 20:12:19.617533   32280 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1002 20:12:19.617539   32280 command_runner.go:130] > # NRI plugin directory to use.
	I1002 20:12:19.617543   32280 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1002 20:12:19.617547   32280 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1002 20:12:19.617552   32280 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1002 20:12:19.617560   32280 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1002 20:12:19.617591   32280 command_runner.go:130] > # nri_disable_connections = false
	I1002 20:12:19.617598   32280 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1002 20:12:19.617604   32280 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1002 20:12:19.617612   32280 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1002 20:12:19.617623   32280 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1002 20:12:19.617630   32280 command_runner.go:130] > # NRI default validator configuration.
	I1002 20:12:19.617637   32280 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1002 20:12:19.617645   32280 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1002 20:12:19.617661   32280 command_runner.go:130] > # can be restricted/rejected:
	I1002 20:12:19.617671   32280 command_runner.go:130] > # - OCI hook injection
	I1002 20:12:19.617683   32280 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1002 20:12:19.617691   32280 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1002 20:12:19.617696   32280 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1002 20:12:19.617702   32280 command_runner.go:130] > # - adjustment of linux namespaces
	I1002 20:12:19.617708   32280 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1002 20:12:19.617715   32280 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1002 20:12:19.617720   32280 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1002 20:12:19.617722   32280 command_runner.go:130] > #
	I1002 20:12:19.617726   32280 command_runner.go:130] > # [crio.nri.default_validator]
	I1002 20:12:19.617733   32280 command_runner.go:130] > # nri_enable_default_validator = false
	I1002 20:12:19.617737   32280 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1002 20:12:19.617743   32280 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1002 20:12:19.617750   32280 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1002 20:12:19.617755   32280 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1002 20:12:19.617759   32280 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1002 20:12:19.617764   32280 command_runner.go:130] > # nri_validator_required_plugins = [
	I1002 20:12:19.617767   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617771   32280 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1002 20:12:19.617779   32280 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 20:12:19.617782   32280 command_runner.go:130] > [crio.stats]
	I1002 20:12:19.617787   32280 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 20:12:19.617796   32280 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 20:12:19.617800   32280 command_runner.go:130] > # stats_collection_period = 0
	I1002 20:12:19.617807   32280 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1002 20:12:19.617812   32280 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1002 20:12:19.617819   32280 command_runner.go:130] > # collection_period = 0
	I1002 20:12:19.617847   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597735388Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1002 20:12:19.617857   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597762161Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1002 20:12:19.617879   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597788561Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1002 20:12:19.617891   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597814431Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1002 20:12:19.617901   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597905829Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:19.617910   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.59812179Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1002 20:12:19.617937   32280 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 20:12:19.618034   32280 cni.go:84] Creating CNI manager for ""
	I1002 20:12:19.618045   32280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:12:19.618055   32280 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:12:19.618074   32280 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753218 NodeName:functional-753218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:12:19.618185   32280 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753218"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:12:19.618237   32280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:12:19.625483   32280 command_runner.go:130] > kubeadm
	I1002 20:12:19.625499   32280 command_runner.go:130] > kubectl
	I1002 20:12:19.625503   32280 command_runner.go:130] > kubelet
	I1002 20:12:19.626080   32280 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:12:19.626131   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:12:19.633273   32280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:12:19.644695   32280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:12:19.656113   32280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1002 20:12:19.667414   32280 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:12:19.670740   32280 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1002 20:12:19.670794   32280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:12:19.752159   32280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:12:19.764280   32280 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218 for IP: 192.168.49.2
	I1002 20:12:19.764303   32280 certs.go:195] generating shared ca certs ...
	I1002 20:12:19.764324   32280 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:19.764461   32280 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:12:19.764507   32280 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:12:19.764516   32280 certs.go:257] generating profile certs ...
	I1002 20:12:19.764596   32280 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key
	I1002 20:12:19.764641   32280 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key.2c64f804
	I1002 20:12:19.764700   32280 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key
	I1002 20:12:19.764711   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:12:19.764723   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:12:19.764735   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:12:19.764749   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:12:19.764761   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:12:19.764773   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:12:19.764785   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:12:19.764797   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:12:19.764840   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:12:19.764868   32280 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:12:19.764878   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:12:19.764907   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:12:19.764932   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:12:19.764953   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:12:19.764991   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:12:19.765016   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:19.765029   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.765042   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:12:19.765474   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:12:19.782548   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:12:19.799734   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:12:19.816390   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:12:19.832589   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:12:19.848700   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:12:19.864849   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:12:19.880775   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:12:19.896846   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:12:19.913614   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:12:19.929578   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:12:19.945677   32280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:12:19.957745   32280 ssh_runner.go:195] Run: openssl version
	I1002 20:12:19.963258   32280 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1002 20:12:19.963501   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:12:19.971695   32280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.975234   32280 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.975257   32280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.975294   32280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:12:20.009021   32280 command_runner.go:130] > 51391683
	I1002 20:12:20.009100   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:12:20.016966   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:12:20.025422   32280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.029194   32280 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.029238   32280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.029282   32280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.064218   32280 command_runner.go:130] > 3ec20f2e
	I1002 20:12:20.064321   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:12:20.072502   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:12:20.080739   32280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.084507   32280 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.084542   32280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.084576   32280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.118973   32280 command_runner.go:130] > b5213941
	I1002 20:12:20.119045   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:12:20.127219   32280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:12:20.130733   32280 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:12:20.130756   32280 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1002 20:12:20.130765   32280 command_runner.go:130] > Device: 8,1	Inode: 579408      Links: 1
	I1002 20:12:20.130774   32280 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:12:20.130783   32280 command_runner.go:130] > Access: 2025-10-02 20:08:10.644972655 +0000
	I1002 20:12:20.130793   32280 command_runner.go:130] > Modify: 2025-10-02 20:04:06.596879146 +0000
	I1002 20:12:20.130799   32280 command_runner.go:130] > Change: 2025-10-02 20:04:06.596879146 +0000
	I1002 20:12:20.130806   32280 command_runner.go:130] >  Birth: 2025-10-02 20:04:06.596879146 +0000
	I1002 20:12:20.130872   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:12:20.164340   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.164601   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:12:20.199434   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.199512   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:12:20.233489   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.233589   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:12:20.266980   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.267235   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:12:20.300792   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.301105   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 20:12:20.334621   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.334895   32280 kubeadm.go:400] StartCluster: {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:12:20.334978   32280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:12:20.335040   32280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:12:20.362233   32280 cri.go:89] found id: ""
	I1002 20:12:20.362287   32280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:12:20.370000   32280 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 20:12:20.370022   32280 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 20:12:20.370028   32280 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 20:12:20.370045   32280 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:12:20.370050   32280 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:12:20.370092   32280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:12:20.377231   32280 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:12:20.377306   32280 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-753218" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.377343   32280 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-9327/kubeconfig needs updating (will repair): [kubeconfig missing "functional-753218" cluster setting kubeconfig missing "functional-753218" context setting]
	I1002 20:12:20.377618   32280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:20.379016   32280 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.379143   32280 kapi.go:59] client config for functional-753218: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:12:20.379525   32280 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:12:20.379543   32280 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:12:20.379548   32280 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:12:20.379552   32280 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:12:20.379556   32280 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:12:20.379580   32280 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:12:20.379896   32280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:12:20.387047   32280 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:12:20.387086   32280 kubeadm.go:601] duration metric: took 17.030465ms to restartPrimaryControlPlane
	I1002 20:12:20.387097   32280 kubeadm.go:402] duration metric: took 52.210982ms to StartCluster
	I1002 20:12:20.387113   32280 settings.go:142] acquiring lock: {Name:mk7417292ad351472478c5266970b58ebdd4130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:20.387221   32280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.387762   32280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:20.387978   32280 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:12:20.388069   32280 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:12:20.388123   32280 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:12:20.388170   32280 addons.go:69] Setting storage-provisioner=true in profile "functional-753218"
	I1002 20:12:20.388189   32280 addons.go:238] Setting addon storage-provisioner=true in "functional-753218"
	I1002 20:12:20.388224   32280 host.go:66] Checking if "functional-753218" exists ...
	I1002 20:12:20.388188   32280 addons.go:69] Setting default-storageclass=true in profile "functional-753218"
	I1002 20:12:20.388303   32280 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-753218"
	I1002 20:12:20.388534   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:20.388593   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:20.390858   32280 out.go:179] * Verifying Kubernetes components...
	I1002 20:12:20.392041   32280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:12:20.408831   32280 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.409013   32280 kapi.go:59] client config for functional-753218: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:12:20.409334   32280 addons.go:238] Setting addon default-storageclass=true in "functional-753218"
	I1002 20:12:20.409372   32280 host.go:66] Checking if "functional-753218" exists ...
	I1002 20:12:20.409857   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:20.409921   32280 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:12:20.411389   32280 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:20.411408   32280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:12:20.411451   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:20.434249   32280 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:20.434269   32280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:12:20.434323   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:20.437366   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:20.453124   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:20.491163   32280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:12:20.504681   32280 node_ready.go:35] waiting up to 6m0s for node "functional-753218" to be "Ready" ...
	I1002 20:12:20.504813   32280 type.go:168] "Request Body" body=""
	I1002 20:12:20.504901   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:20.505187   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:20.544925   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:20.560749   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:20.598254   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:20.598305   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.598334   32280 retry.go:31] will retry after 360.790251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.611750   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:20.611829   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.611854   32280 retry.go:31] will retry after 210.270105ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
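
[Editor's note] Every apply in this run fails the same way: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver it is pointed at (localhost:8441 inside the node), and that socket refuses connections because the apiserver is down, so the manifests are rejected before anything reaches the cluster (--validate=false would only skip the schema download; the apply itself would hit the same refused connection). minikube's retry.go then reapplies with growing, jittered delays. A hypothetical backoff loop illustrating the cadence visible in these timestamps; this is not minikube's actual retry package:

    // Hypothetical retry helper: jittered exponential backoff, roughly the
    // ~0.2s, 0.4s, ... growth retry.go logs above. Illustrative only.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    func applyWithRetry(manifest string, maxWait time.Duration) error {
    	delay := 200 * time.Millisecond
    	deadline := time.Now().Add(maxWait)
    	for {
    		err := exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("apply %s: %w", manifest, err)
    		}
    		// Sleep delay plus up to 100% jitter, then double, capped at 8s.
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
    		if delay < 8*time.Second {
    			delay *= 2
    		}
    	}
    }

    func main() {
    	fmt.Println(applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 2*time.Minute))
    }
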
	I1002 20:12:20.822270   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:20.872283   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:20.874485   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.874514   32280 retry.go:31] will retry after 244.966298ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.959846   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:21.005341   32280 type.go:168] "Request Body" body=""
	I1002 20:12:21.005421   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:21.005781   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:21.012418   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.012451   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.012466   32280 retry.go:31] will retry after 409.292121ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.119728   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:21.168429   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.170739   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.170771   32280 retry.go:31] will retry after 294.217693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.422106   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:21.465688   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:21.470239   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.472502   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.472537   32280 retry.go:31] will retry after 332.995728ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.505685   32280 type.go:168] "Request Body" body=""
	I1002 20:12:21.505778   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:21.506123   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:21.516911   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.516971   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.516996   32280 retry.go:31] will retry after 954.810325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.806393   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:21.857573   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.857614   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.857637   32280 retry.go:31] will retry after 1.033500231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.004877   32280 type.go:168] "Request Body" body=""
	I1002 20:12:22.004976   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:22.005310   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:22.472906   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:22.505435   32280 type.go:168] "Request Body" body=""
	I1002 20:12:22.505517   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:22.505893   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:22.505957   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
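
[Editor's note] Interleaved with the applies, node_ready.go polls GET /api/v1/nodes/functional-753218 every ~500ms for up to 6m0s, looking for the node's Ready condition; each probe fails at the TCP layer, which is why the logged responses carry an empty status, no headers, and 0ms. A condensed sketch of that wait with client-go; wait.PollUntilContextTimeout is a stand-in for minikube's hand-rolled loop, but the condition check against corev1.NodeReady is the standard way to read it:

    // Sketch: poll a node's Ready condition every 500ms for up to 6 minutes,
    // approximating node_ready.go. Clientset construction omitted (see the
    // rest.Config sketch earlier).
    package nodeready

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(cs kubernetes.Interface, name string) error {
    	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // e.g. connection refused: keep polling, as the log does
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
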
	I1002 20:12:22.524411   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:22.524454   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.524474   32280 retry.go:31] will retry after 931.915639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.892005   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:22.942851   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:22.942928   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.942955   32280 retry.go:31] will retry after 1.834952264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:23.004927   32280 type.go:168] "Request Body" body=""
	I1002 20:12:23.005007   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:23.005354   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:23.456821   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:23.505094   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:23.505147   32280 type.go:168] "Request Body" body=""
	I1002 20:12:23.505217   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:23.505484   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:23.507597   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:23.507626   32280 retry.go:31] will retry after 2.313716894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:24.005157   32280 type.go:168] "Request Body" body=""
	I1002 20:12:24.005267   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:24.005594   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:24.505508   32280 type.go:168] "Request Body" body=""
	I1002 20:12:24.505632   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:24.506012   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:24.506092   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:24.778419   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:24.830315   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:24.830361   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:24.830382   32280 retry.go:31] will retry after 2.530323246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:25.005736   32280 type.go:168] "Request Body" body=""
	I1002 20:12:25.005808   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:25.006117   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:25.504853   32280 type.go:168] "Request Body" body=""
	I1002 20:12:25.504920   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:25.505273   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:25.821714   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:25.872812   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:25.872859   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:25.872881   32280 retry.go:31] will retry after 1.957365536s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:26.005078   32280 type.go:168] "Request Body" body=""
	I1002 20:12:26.005153   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:26.005503   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:26.505250   32280 type.go:168] "Request Body" body=""
	I1002 20:12:26.505323   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:26.505711   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:27.005530   32280 type.go:168] "Request Body" body=""
	I1002 20:12:27.005599   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:27.005959   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:27.006023   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:27.361473   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:27.411520   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:27.413776   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:27.413807   32280 retry.go:31] will retry after 3.768585845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:27.504922   32280 type.go:168] "Request Body" body=""
	I1002 20:12:27.505019   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:27.505351   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:27.830904   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:27.880071   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:27.882324   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:27.882350   32280 retry.go:31] will retry after 2.676983733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:28.005719   32280 type.go:168] "Request Body" body=""
	I1002 20:12:28.005789   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:28.006101   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:28.504826   32280 type.go:168] "Request Body" body=""
	I1002 20:12:28.504909   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:28.505226   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:29.004968   32280 type.go:168] "Request Body" body=""
	I1002 20:12:29.005052   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:29.005361   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:29.505178   32280 type.go:168] "Request Body" body=""
	I1002 20:12:29.505270   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:29.505576   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:29.505628   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:30.005335   32280 type.go:168] "Request Body" body=""
	I1002 20:12:30.005400   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:30.005747   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:30.505557   32280 type.go:168] "Request Body" body=""
	I1002 20:12:30.505643   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:30.505971   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:30.560186   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:30.610807   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:30.610870   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:30.610892   32280 retry.go:31] will retry after 7.973230912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:31.005274   32280 type.go:168] "Request Body" body=""
	I1002 20:12:31.005351   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:31.005789   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:31.182990   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:31.231953   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:31.234462   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:31.234491   32280 retry.go:31] will retry after 5.687657455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:31.504842   32280 type.go:168] "Request Body" body=""
	I1002 20:12:31.504956   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:31.505254   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:32.005885   32280 type.go:168] "Request Body" body=""
	I1002 20:12:32.005962   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:32.006262   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:32.006314   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:32.504840   32280 type.go:168] "Request Body" body=""
	I1002 20:12:32.504910   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:32.505216   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:33.005827   32280 type.go:168] "Request Body" body=""
	I1002 20:12:33.005903   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:33.006210   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:33.505861   32280 type.go:168] "Request Body" body=""
	I1002 20:12:33.505928   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:33.506234   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:34.005834   32280 type.go:168] "Request Body" body=""
	I1002 20:12:34.005939   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:34.006292   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:34.006347   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:34.505067   32280 type.go:168] "Request Body" body=""
	I1002 20:12:34.505178   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:34.505476   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:35.005027   32280 type.go:168] "Request Body" body=""
	I1002 20:12:35.005102   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:35.005423   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:35.504956   32280 type.go:168] "Request Body" body=""
	I1002 20:12:35.505018   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:35.505338   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:36.004897   32280 type.go:168] "Request Body" body=""
	I1002 20:12:36.005010   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:36.005325   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:36.504908   32280 type.go:168] "Request Body" body=""
	I1002 20:12:36.504975   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:36.505273   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:36.505325   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:36.922844   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:36.972691   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:36.975093   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:36.975120   32280 retry.go:31] will retry after 6.057609391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:37.005334   32280 type.go:168] "Request Body" body=""
	I1002 20:12:37.005422   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:37.005758   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:37.505360   32280 type.go:168] "Request Body" body=""
	I1002 20:12:37.505473   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:37.505826   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:38.005595   32280 type.go:168] "Request Body" body=""
	I1002 20:12:38.005685   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:38.005995   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:38.505731   32280 type.go:168] "Request Body" body=""
	I1002 20:12:38.505833   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:38.506204   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:38.506258   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:38.584343   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:38.634498   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:38.634541   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:38.634559   32280 retry.go:31] will retry after 11.473349324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:39.004966   32280 type.go:168] "Request Body" body=""
	I1002 20:12:39.005047   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:39.005329   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:39.505287   32280 type.go:168] "Request Body" body=""
	I1002 20:12:39.505349   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:39.505690   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:40.005217   32280 type.go:168] "Request Body" body=""
	I1002 20:12:40.005283   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:40.005689   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:40.505522   32280 type.go:168] "Request Body" body=""
	I1002 20:12:40.505586   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:40.505931   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:41.005519   32280 type.go:168] "Request Body" body=""
	I1002 20:12:41.005620   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:41.005984   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:41.006049   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:41.505595   32280 type.go:168] "Request Body" body=""
	I1002 20:12:41.505678   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:41.506021   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:42.005588   32280 type.go:168] "Request Body" body=""
	I1002 20:12:42.005666   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:42.005990   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:42.505580   32280 type.go:168] "Request Body" body=""
	I1002 20:12:42.505660   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:42.506010   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:43.005624   32280 type.go:168] "Request Body" body=""
	I1002 20:12:43.005704   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:43.006025   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:43.006077   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:43.033216   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:43.084626   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:43.084680   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:43.084700   32280 retry.go:31] will retry after 13.696949746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:43.504971   32280 type.go:168] "Request Body" body=""
	I1002 20:12:43.505052   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:43.505379   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:44.004912   32280 type.go:168] "Request Body" body=""
	I1002 20:12:44.004988   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:44.005285   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:44.504985   32280 type.go:168] "Request Body" body=""
	I1002 20:12:44.505082   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:44.505402   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:45.004953   32280 type.go:168] "Request Body" body=""
	I1002 20:12:45.005026   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:45.005321   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:45.504904   32280 type.go:168] "Request Body" body=""
	I1002 20:12:45.504997   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:45.505300   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:45.505354   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:46.004960   32280 type.go:168] "Request Body" body=""
	I1002 20:12:46.005023   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:46.005350   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:46.504882   32280 type.go:168] "Request Body" body=""
	I1002 20:12:46.505005   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:46.505316   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:47.004909   32280 type.go:168] "Request Body" body=""
	I1002 20:12:47.004973   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:47.005265   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:47.505882   32280 type.go:168] "Request Body" body=""
	I1002 20:12:47.506000   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:47.506320   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:47.506400   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
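The 500ms GET /api/v1/nodes/functional-753218 cycle that fills this log is the node-readiness poll: fetch the node object, inspect its Ready condition, and log the node_ready.go:55 warning and keep retrying while the apiserver refuses connections. A client-go sketch of that loop is below; the kubeconfig path, timeout, and function names are assumptions for illustration, not minikube's implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object every 500ms until its Ready
// condition reports True, mirroring the GET loop in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Matches the node_ready.go warnings: log and retry.
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
	}
}

func main() {
	// Kubeconfig path is a placeholder assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "functional-753218"); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}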
	I1002 20:12:48.004928   32280 type.go:168] "Request Body" body=""
	I1002 20:12:48.005004   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:48.005305   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:48.504865   32280 type.go:168] "Request Body" body=""
	I1002 20:12:48.504959   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:48.505270   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:49.004954   32280 type.go:168] "Request Body" body=""
	I1002 20:12:49.005020   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:49.005323   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:49.505045   32280 type.go:168] "Request Body" body=""
	I1002 20:12:49.505108   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:49.505418   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:50.004957   32280 type.go:168] "Request Body" body=""
	I1002 20:12:50.005023   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:50.005336   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:50.005399   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:50.108603   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:50.158622   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:50.158675   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:50.158705   32280 retry.go:31] will retry after 7.866512619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:50.505487   32280 type.go:168] "Request Body" body=""
	I1002 20:12:50.505555   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:50.505903   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:51.005559   32280 type.go:168] "Request Body" body=""
	I1002 20:12:51.005635   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:51.005990   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:51.505707   32280 type.go:168] "Request Body" body=""
	I1002 20:12:51.505791   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:51.506153   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:52.005777   32280 type.go:168] "Request Body" body=""
	I1002 20:12:52.005901   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:52.006225   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:52.006281   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:52.504874   32280 type.go:168] "Request Body" body=""
	I1002 20:12:52.504935   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:52.505268   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:53.005873   32280 type.go:168] "Request Body" body=""
	I1002 20:12:53.005951   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:53.006260   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:53.504881   32280 type.go:168] "Request Body" body=""
	I1002 20:12:53.505006   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:53.505318   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:54.004965   32280 type.go:168] "Request Body" body=""
	I1002 20:12:54.005040   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:54.005355   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:54.505336   32280 type.go:168] "Request Body" body=""
	I1002 20:12:54.505429   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:54.505803   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:54.505860   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:55.005500   32280 type.go:168] "Request Body" body=""
	I1002 20:12:55.005582   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:55.005971   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:55.505630   32280 type.go:168] "Request Body" body=""
	I1002 20:12:55.505727   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:55.506074   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:56.005749   32280 type.go:168] "Request Body" body=""
	I1002 20:12:56.005828   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:56.006175   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:56.505817   32280 type.go:168] "Request Body" body=""
	I1002 20:12:56.505912   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:56.506244   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:56.506305   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:56.782639   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:56.831722   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:56.833971   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:56.834005   32280 retry.go:31] will retry after 8.803585786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:57.005357   32280 type.go:168] "Request Body" body=""
	I1002 20:12:57.005440   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:57.005756   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:57.505340   32280 type.go:168] "Request Body" body=""
	I1002 20:12:57.505420   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:57.505751   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:58.005333   32280 type.go:168] "Request Body" body=""
	I1002 20:12:58.005402   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:58.005752   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:58.025966   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:58.074036   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:58.076335   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:58.076367   32280 retry.go:31] will retry after 21.837732561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:58.504884   32280 type.go:168] "Request Body" body=""
	I1002 20:12:58.504952   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:58.505269   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:59.005019   32280 type.go:168] "Request Body" body=""
	I1002 20:12:59.005088   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:59.005416   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:59.005476   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:59.505294   32280 type.go:168] "Request Body" body=""
	I1002 20:12:59.505371   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:59.505719   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:00.005587   32280 type.go:168] "Request Body" body=""
	I1002 20:13:00.005681   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:00.006070   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:00.505895   32280 type.go:168] "Request Body" body=""
	I1002 20:13:00.505970   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:00.506282   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:01.005032   32280 type.go:168] "Request Body" body=""
	I1002 20:13:01.005101   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:01.005454   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:01.005507   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:01.505230   32280 type.go:168] "Request Body" body=""
	I1002 20:13:01.505332   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:01.505713   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:02.005565   32280 type.go:168] "Request Body" body=""
	I1002 20:13:02.005638   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:02.005989   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:02.505747   32280 type.go:168] "Request Body" body=""
	I1002 20:13:02.505834   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:02.506161   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:03.004921   32280 type.go:168] "Request Body" body=""
	I1002 20:13:03.004999   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:03.005353   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:03.505030   32280 type.go:168] "Request Body" body=""
	I1002 20:13:03.505163   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:03.505496   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:03.505553   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:04.005013   32280 type.go:168] "Request Body" body=""
	I1002 20:13:04.005102   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:04.005412   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:04.505235   32280 type.go:168] "Request Body" body=""
	I1002 20:13:04.505310   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:04.505603   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:05.005373   32280 type.go:168] "Request Body" body=""
	I1002 20:13:05.005436   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:05.005779   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:05.505626   32280 type.go:168] "Request Body" body=""
	I1002 20:13:05.505713   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:05.506017   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:05.506071   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:05.638454   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:13:05.690182   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:05.690237   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:13:05.690256   32280 retry.go:31] will retry after 17.824989731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
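Every failure in this window reduces to the same symptom: nothing is listening on apiserver port 8441, whether it is addressed as 192.168.49.2 (the node-ready poll) or localhost (the kubectl applies). A raw TCP probe confirms that independently of kubectl; the sketch below is a diagnostic aid, with the two endpoints taken from the log rather than from any minikube check.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Both endpoints should accept connections once kube-apiserver is up.
	for _, addr := range []string{"192.168.49.2:8441", "localhost:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // e.g. "connect: connection refused"
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}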
	I1002 20:13:06.005701   32280 type.go:168] "Request Body" body=""
	I1002 20:13:06.005799   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:06.006119   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:06.504842   32280 type.go:168] "Request Body" body=""
	I1002 20:13:06.504914   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:06.505251   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:07.005004   32280 type.go:168] "Request Body" body=""
	I1002 20:13:07.005108   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:07.005436   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:07.505210   32280 type.go:168] "Request Body" body=""
	I1002 20:13:07.505283   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:07.505609   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:08.005363   32280 type.go:168] "Request Body" body=""
	I1002 20:13:08.005446   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:08.005783   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:08.005845   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:08.505633   32280 type.go:168] "Request Body" body=""
	I1002 20:13:08.505725   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:08.506087   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:09.004810   32280 type.go:168] "Request Body" body=""
	I1002 20:13:09.004939   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:09.005246   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:09.505036   32280 type.go:168] "Request Body" body=""
	I1002 20:13:09.505109   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:09.505465   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:10.005227   32280 type.go:168] "Request Body" body=""
	I1002 20:13:10.005294   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:10.005624   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:10.505218   32280 type.go:168] "Request Body" body=""
	I1002 20:13:10.505284   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:10.505609   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:10.505692   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:11.005490   32280 type.go:168] "Request Body" body=""
	I1002 20:13:11.005558   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:11.005879   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:11.505739   32280 type.go:168] "Request Body" body=""
	I1002 20:13:11.505817   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:11.506182   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:12.004937   32280 type.go:168] "Request Body" body=""
	I1002 20:13:12.005026   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:12.005341   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:12.505102   32280 type.go:168] "Request Body" body=""
	I1002 20:13:12.505168   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:12.505509   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:13.005242   32280 type.go:168] "Request Body" body=""
	I1002 20:13:13.005316   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:13.005692   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:13.005741   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:13.505519   32280 type.go:168] "Request Body" body=""
	I1002 20:13:13.505584   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:13.505958   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:14.005767   32280 type.go:168] "Request Body" body=""
	I1002 20:13:14.005841   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:14.006164   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:14.505003   32280 type.go:168] "Request Body" body=""
	I1002 20:13:14.505069   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:14.505397   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:15.005101   32280 type.go:168] "Request Body" body=""
	I1002 20:13:15.005189   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:15.005569   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:15.505328   32280 type.go:168] "Request Body" body=""
	I1002 20:13:15.505404   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:15.505799   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:15.505864   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:16.005581   32280 type.go:168] "Request Body" body=""
	I1002 20:13:16.005659   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:16.006015   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:16.505815   32280 type.go:168] "Request Body" body=""
	I1002 20:13:16.505909   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:16.506240   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:17.004924   32280 type.go:168] "Request Body" body=""
	I1002 20:13:17.004989   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:17.005317   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:17.505042   32280 type.go:168] "Request Body" body=""
	I1002 20:13:17.505108   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:17.505466   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:18.005185   32280 type.go:168] "Request Body" body=""
	I1002 20:13:18.005248   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:18.005594   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:18.005675   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:18.505365   32280 type.go:168] "Request Body" body=""
	I1002 20:13:18.505431   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:18.505829   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:19.005617   32280 type.go:168] "Request Body" body=""
	I1002 20:13:19.005703   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:19.006054   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:19.505860   32280 type.go:168] "Request Body" body=""
	I1002 20:13:19.505925   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:19.506274   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:19.914795   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:13:19.964946   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:19.964982   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:13:19.964998   32280 retry.go:31] will retry after 37.877741779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:13:20.005163   32280 type.go:168] "Request Body" body=""
	I1002 20:13:20.005260   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:20.005579   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:20.505603   32280 type.go:168] "Request Body" body=""
	I1002 20:13:20.505696   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:20.506040   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:20.506105   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:21.005687   32280 type.go:168] "Request Body" body=""
	I1002 20:13:21.005752   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:21.006074   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:21.505754   32280 type.go:168] "Request Body" body=""
	I1002 20:13:21.505828   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:21.506211   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:22.005841   32280 type.go:168] "Request Body" body=""
	I1002 20:13:22.005906   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:22.006231   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:22.505901   32280 type.go:168] "Request Body" body=""
	I1002 20:13:22.506010   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:22.506365   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:22.506463   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:23.004934   32280 type.go:168] "Request Body" body=""
	I1002 20:13:23.005035   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:23.005390   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:23.504963   32280 type.go:168] "Request Body" body=""
	I1002 20:13:23.505048   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:23.505365   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:23.515608   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:13:23.566822   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:23.566879   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:13:23.566903   32280 retry.go:31] will retry after 23.13190401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:13:24.005366   32280 type.go:168] "Request Body" body=""
	I1002 20:13:24.005433   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:24.005789   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:24.505700   32280 type.go:168] "Request Body" body=""
	I1002 20:13:24.505774   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:24.506172   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:25.005817   32280 type.go:168] "Request Body" body=""
	I1002 20:13:25.005885   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:25.006218   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:25.006274   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:25.505892   32280 type.go:168] "Request Body" body=""
	I1002 20:13:25.505960   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:25.506325   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:26.005002   32280 type.go:168] "Request Body" body=""
	I1002 20:13:26.005093   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:26.005420   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET /api/v1/nodes/functional-753218 request/response pair repeats every ~500ms through 20:13:46, every response empty; node_ready.go:55 logs the "connection refused" warning below roughly every 2s (20:13:27, :29, :31, :33, :36, :38, :40, :42, :44) ...]
	W1002 20:13:46.506269   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
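
The run above is minikube's node-readiness gate: node_ready.go issues the GET every ~500ms and logs a "will retry" warning each time the apiserver port refuses the connection. A minimal Go sketch of that poll-and-retry pattern follows; the function names, intervals, and the status-code check are illustrative assumptions, not minikube's actual implementation.

// nodeready_sketch.go — a minimal polling loop in the spirit of the
// node_ready.go wait seen above. All names are illustrative; this is
// not the actual minikube implementation.
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

// isNodeReady performs one GET against the node endpoint. A refused
// connection (as in the log) surfaces as a transport error.
func isNodeReady(client *http.Client, apiURL string) (bool, error) {
	resp, err := client.Get(apiURL)
	if err != nil {
		return false, err // e.g. dial tcp ...:8441: connect: connection refused
	}
	defer resp.Body.Close()
	// A real check would decode the Node object and inspect its "Ready"
	// condition; a 200 status stands in for that here.
	return resp.StatusCode == http.StatusOK, nil
}

// waitNodeReady retries every interval until ready or timeout, logging a
// warning on each failed attempt, mirroring the 500ms cadence in the log.
func waitNodeReady(apiURL string, interval, timeout time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ready, err := isNodeReady(client, apiURL)
		if err != nil {
			fmt.Printf("W error getting node (will retry): %v\n", err)
		} else if ready {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for node Ready")
}

func main() {
	// Endpoint taken from the log; unreachable here, so this only
	// demonstrates the retry/warning loop.
	err := waitNodeReady("https://192.168.49.2:8441/api/v1/nodes/functional-753218",
		500*time.Millisecond, 5*time.Second)
	fmt.Println(err)
}
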
	I1002 20:13:46.699644   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:13:46.747344   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:46.749844   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:46.749973   32280 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
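
The "apply failed, will retry" path above comes from minikube's addon enabler: it shells out to kubectl apply and re-queues the manifest when the process exits non-zero (here because kubectl cannot download the OpenAPI schema from the dead apiserver). A minimal sketch of that retry-on-exit-status loop, assuming a plain kubectl on PATH; the attempt count and backoff are illustrative, not minikube's values.

// applyretry_sketch.go — retry a failing `kubectl apply`, echoing the
// "apply failed, will retry" behavior logged by addons.go above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		// Exit status 1 corresponds to the validation failure in the log:
		// kubectl cannot reach /openapi/v2 while the apiserver is down.
		lastErr = fmt.Errorf("apply failed, will retry: %w\noutput:\n%s", err, out)
		fmt.Println(lastErr)
		time.Sleep(backoff)
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3, 10*time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}
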
	[... GET /api/v1/nodes/functional-753218 polling continues every ~500ms from 20:13:47 through 20:13:57 with the same empty responses; node_ready.go:55 "connection refused" warnings at 20:13:49, :51, :54, and :56 ...]
	I1002 20:13:57.843521   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:13:57.893953   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:57.894023   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:57.894118   32280 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:13:57.896474   32280 out.go:179] * Enabled addons: 
	I1002 20:13:57.898063   32280 addons.go:514] duration metric: took 1m37.510002204s for enable addons: enabled=[]
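
The addon phase ends here with an empty set (enabled=[]) while the readiness poll keeps failing. Note that every failure is "connect: connection refused" rather than a timeout: the host is reachable but nothing is listening on 192.168.49.2:8441, i.e. kube-apiserver is down rather than slow. A small diagnostic sketch (not part of minikube) that makes that distinction explicit:

// probe_sketch.go — distinguish "refused" (no listener) from "timeout"
// (host/route unreachable) for an apiserver endpoint. Diagnostic only.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.49.2:8441" // endpoint from the log above
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// "connection refused" => host up, port closed (apiserver down).
		// "i/o timeout"        => host or route unreachable.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("listener present on", addr)
}
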
	I1002 20:13:58.005248   32280 type.go:168] "Request Body" body=""
	I1002 20:13:58.005351   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:58.005671   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:58.505487   32280 type.go:168] "Request Body" body=""
	I1002 20:13:58.505565   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:58.505958   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:58.506014   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:59.005771   32280 type.go:168] "Request Body" body=""
	I1002 20:13:59.005876   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:59.006192   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:59.504962   32280 type.go:168] "Request Body" body=""
	I1002 20:13:59.505047   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:59.505359   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:00.005006   32280 type.go:168] "Request Body" body=""
	I1002 20:14:00.005069   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:00.005392   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:00.505111   32280 type.go:168] "Request Body" body=""
	I1002 20:14:00.505199   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:00.505503   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:01.005227   32280 type.go:168] "Request Body" body=""
	I1002 20:14:01.005326   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:01.005717   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:01.005789   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:01.505598   32280 type.go:168] "Request Body" body=""
	I1002 20:14:01.505687   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:01.506000   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:02.005861   32280 type.go:168] "Request Body" body=""
	I1002 20:14:02.005935   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:02.006338   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:02.504980   32280 type.go:168] "Request Body" body=""
	I1002 20:14:02.505043   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:02.505444   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:03.005199   32280 type.go:168] "Request Body" body=""
	I1002 20:14:03.005295   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:03.005617   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:03.505417   32280 type.go:168] "Request Body" body=""
	I1002 20:14:03.505500   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:03.505831   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:03.505910   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:04.005688   32280 type.go:168] "Request Body" body=""
	I1002 20:14:04.005768   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:04.006079   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:04.505822   32280 type.go:168] "Request Body" body=""
	I1002 20:14:04.505929   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:04.506212   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:05.004939   32280 type.go:168] "Request Body" body=""
	I1002 20:14:05.005032   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:05.005365   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:05.505085   32280 type.go:168] "Request Body" body=""
	I1002 20:14:05.505167   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:05.505489   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:06.005229   32280 type.go:168] "Request Body" body=""
	I1002 20:14:06.005293   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:06.005679   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:06.005733   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:06.505561   32280 type.go:168] "Request Body" body=""
	I1002 20:14:06.505662   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:06.505997   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:07.005758   32280 type.go:168] "Request Body" body=""
	I1002 20:14:07.005865   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:07.006186   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:07.504924   32280 type.go:168] "Request Body" body=""
	I1002 20:14:07.504999   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:07.505319   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:08.005020   32280 type.go:168] "Request Body" body=""
	I1002 20:14:08.005110   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:08.005412   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:08.505144   32280 type.go:168] "Request Body" body=""
	I1002 20:14:08.505221   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:08.505546   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:08.505597   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:09.005324   32280 type.go:168] "Request Body" body=""
	I1002 20:14:09.005388   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:09.005759   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:09.505663   32280 type.go:168] "Request Body" body=""
	I1002 20:14:09.505738   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:09.506059   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:10.004831   32280 type.go:168] "Request Body" body=""
	I1002 20:14:10.004913   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:10.005285   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:10.504951   32280 type.go:168] "Request Body" body=""
	I1002 20:14:10.505047   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:10.505396   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:11.005158   32280 type.go:168] "Request Body" body=""
	I1002 20:14:11.005275   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:11.005733   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:11.005797   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:11.505549   32280 type.go:168] "Request Body" body=""
	I1002 20:14:11.505697   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:11.506073   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:12.005903   32280 type.go:168] "Request Body" body=""
	I1002 20:14:12.005966   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:12.006268   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:12.505003   32280 type.go:168] "Request Body" body=""
	I1002 20:14:12.505086   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:12.505427   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:13.004849   32280 type.go:168] "Request Body" body=""
	I1002 20:14:13.004968   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:13.005312   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:13.505032   32280 type.go:168] "Request Body" body=""
	I1002 20:14:13.505109   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:13.505438   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:13.505493   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:14.005138   32280 type.go:168] "Request Body" body=""
	I1002 20:14:14.005202   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:14.005533   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:14.505306   32280 type.go:168] "Request Body" body=""
	I1002 20:14:14.505402   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:14.505762   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:15.005543   32280 type.go:168] "Request Body" body=""
	I1002 20:14:15.005604   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:15.005962   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:15.505741   32280 type.go:168] "Request Body" body=""
	I1002 20:14:15.505841   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:15.506168   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:15.506245   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:16.005122   32280 type.go:168] "Request Body" body=""
	I1002 20:14:16.005232   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:16.005696   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:16.504984   32280 type.go:168] "Request Body" body=""
	I1002 20:14:16.505049   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:16.505370   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:17.004896   32280 type.go:168] "Request Body" body=""
	I1002 20:14:17.004982   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:17.005311   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:17.504836   32280 type.go:168] "Request Body" body=""
	I1002 20:14:17.504907   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:17.505220   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:18.005868   32280 type.go:168] "Request Body" body=""
	I1002 20:14:18.005951   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:18.006358   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:18.006423   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:18.504940   32280 type.go:168] "Request Body" body=""
	I1002 20:14:18.505026   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:18.505333   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-753218 poll above repeats on a ~500ms interval from 20:14:19 through 20:15:19, every attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 logs the "will retry" warning roughly every 2.5s ...]
	I1002 20:15:20.005007   32280 type.go:168] "Request Body" body=""
	I1002 20:15:20.005071   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:20.005381   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:20.505183   32280 type.go:168] "Request Body" body=""
	I1002 20:15:20.505251   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:20.505582   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:20.505635   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:21.004962   32280 type.go:168] "Request Body" body=""
	I1002 20:15:21.005029   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:21.005332   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:21.504914   32280 type.go:168] "Request Body" body=""
	I1002 20:15:21.504977   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:21.505316   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:22.004889   32280 type.go:168] "Request Body" body=""
	I1002 20:15:22.004987   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:22.005283   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:22.504874   32280 type.go:168] "Request Body" body=""
	I1002 20:15:22.504937   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:22.505267   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:23.004838   32280 type.go:168] "Request Body" body=""
	I1002 20:15:23.004900   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:23.005227   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:23.005283   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:23.505836   32280 type.go:168] "Request Body" body=""
	I1002 20:15:23.505908   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:23.506231   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:24.005841   32280 type.go:168] "Request Body" body=""
	I1002 20:15:24.005903   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:24.006198   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:24.504997   32280 type.go:168] "Request Body" body=""
	I1002 20:15:24.505059   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:24.505375   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:25.004926   32280 type.go:168] "Request Body" body=""
	I1002 20:15:25.005003   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:25.005304   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:25.005362   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:25.504905   32280 type.go:168] "Request Body" body=""
	I1002 20:15:25.504971   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:25.505275   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:26.004817   32280 type.go:168] "Request Body" body=""
	I1002 20:15:26.004887   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:26.005210   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:26.505879   32280 type.go:168] "Request Body" body=""
	I1002 20:15:26.506038   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:26.506430   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:27.005027   32280 type.go:168] "Request Body" body=""
	I1002 20:15:27.005114   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:27.005415   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:27.005474   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:27.505002   32280 type.go:168] "Request Body" body=""
	I1002 20:15:27.505082   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:27.505420   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:28.004986   32280 type.go:168] "Request Body" body=""
	I1002 20:15:28.005053   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:28.005352   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:28.504930   32280 type.go:168] "Request Body" body=""
	I1002 20:15:28.505000   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:28.505364   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:29.004934   32280 type.go:168] "Request Body" body=""
	I1002 20:15:29.005011   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:29.005308   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:29.505191   32280 type.go:168] "Request Body" body=""
	I1002 20:15:29.505283   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:29.505637   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:29.505741   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:30.005210   32280 type.go:168] "Request Body" body=""
	I1002 20:15:30.005271   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:30.005562   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:30.505505   32280 type.go:168] "Request Body" body=""
	I1002 20:15:30.505575   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:30.505938   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:31.005554   32280 type.go:168] "Request Body" body=""
	I1002 20:15:31.005640   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:31.005967   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:31.505585   32280 type.go:168] "Request Body" body=""
	I1002 20:15:31.505683   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:31.506006   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:31.506056   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:32.005634   32280 type.go:168] "Request Body" body=""
	I1002 20:15:32.005710   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:32.006002   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:32.505666   32280 type.go:168] "Request Body" body=""
	I1002 20:15:32.505734   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:32.506032   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:33.005694   32280 type.go:168] "Request Body" body=""
	I1002 20:15:33.005768   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:33.006118   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:33.505738   32280 type.go:168] "Request Body" body=""
	I1002 20:15:33.505801   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:33.506120   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:33.506192   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:34.005749   32280 type.go:168] "Request Body" body=""
	I1002 20:15:34.005835   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:34.006190   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:34.504997   32280 type.go:168] "Request Body" body=""
	I1002 20:15:34.505063   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:34.505359   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:35.004979   32280 type.go:168] "Request Body" body=""
	I1002 20:15:35.005040   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:35.005361   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:35.504958   32280 type.go:168] "Request Body" body=""
	I1002 20:15:35.505028   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:35.505325   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:36.004893   32280 type.go:168] "Request Body" body=""
	I1002 20:15:36.004960   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:36.005275   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:36.005327   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:36.504861   32280 type.go:168] "Request Body" body=""
	I1002 20:15:36.504942   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:36.505241   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:37.004818   32280 type.go:168] "Request Body" body=""
	I1002 20:15:37.004883   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:37.005203   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:37.504876   32280 type.go:168] "Request Body" body=""
	I1002 20:15:37.504951   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:37.505259   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:38.004888   32280 type.go:168] "Request Body" body=""
	I1002 20:15:38.004979   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:38.005286   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:38.504969   32280 type.go:168] "Request Body" body=""
	I1002 20:15:38.505049   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:38.505376   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:38.505429   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:39.004950   32280 type.go:168] "Request Body" body=""
	I1002 20:15:39.005018   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:39.005330   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:39.505071   32280 type.go:168] "Request Body" body=""
	I1002 20:15:39.505137   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:39.505431   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:40.005007   32280 type.go:168] "Request Body" body=""
	I1002 20:15:40.005090   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:40.005385   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:40.505098   32280 type.go:168] "Request Body" body=""
	I1002 20:15:40.505197   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:40.505502   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:40.505558   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:41.005068   32280 type.go:168] "Request Body" body=""
	I1002 20:15:41.005132   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:41.005435   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:41.505003   32280 type.go:168] "Request Body" body=""
	I1002 20:15:41.505067   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:41.505459   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:42.005029   32280 type.go:168] "Request Body" body=""
	I1002 20:15:42.005101   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:42.005410   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:42.505061   32280 type.go:168] "Request Body" body=""
	I1002 20:15:42.505128   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:42.505440   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:43.005053   32280 type.go:168] "Request Body" body=""
	I1002 20:15:43.005164   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:43.005534   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:43.005626   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:43.505101   32280 type.go:168] "Request Body" body=""
	I1002 20:15:43.505195   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:43.505496   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:44.005084   32280 type.go:168] "Request Body" body=""
	I1002 20:15:44.005178   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:44.005496   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:44.505460   32280 type.go:168] "Request Body" body=""
	I1002 20:15:44.505524   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:44.505855   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:45.005560   32280 type.go:168] "Request Body" body=""
	I1002 20:15:45.005631   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:45.005984   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:45.006035   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:45.505602   32280 type.go:168] "Request Body" body=""
	I1002 20:15:45.505705   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:45.506005   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:46.005627   32280 type.go:168] "Request Body" body=""
	I1002 20:15:46.005713   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:46.006024   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:46.505689   32280 type.go:168] "Request Body" body=""
	I1002 20:15:46.505755   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:46.506045   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:47.005272   32280 type.go:168] "Request Body" body=""
	I1002 20:15:47.005340   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:47.005666   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:47.505213   32280 type.go:168] "Request Body" body=""
	I1002 20:15:47.505283   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:47.505638   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:47.505724   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:48.004992   32280 type.go:168] "Request Body" body=""
	I1002 20:15:48.005062   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:48.005371   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:48.504960   32280 type.go:168] "Request Body" body=""
	I1002 20:15:48.505025   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:48.505343   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:49.004918   32280 type.go:168] "Request Body" body=""
	I1002 20:15:49.004982   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:49.005325   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:49.505056   32280 type.go:168] "Request Body" body=""
	I1002 20:15:49.505122   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:49.505424   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:50.004984   32280 type.go:168] "Request Body" body=""
	I1002 20:15:50.005048   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:50.005347   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:50.005399   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:50.505099   32280 type.go:168] "Request Body" body=""
	I1002 20:15:50.505173   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:50.505478   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:51.005059   32280 type.go:168] "Request Body" body=""
	I1002 20:15:51.005133   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:51.005463   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:51.505016   32280 type.go:168] "Request Body" body=""
	I1002 20:15:51.505084   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:51.505393   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:52.005067   32280 type.go:168] "Request Body" body=""
	I1002 20:15:52.005155   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:52.005476   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:52.005533   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:52.505040   32280 type.go:168] "Request Body" body=""
	I1002 20:15:52.505105   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:52.505403   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:53.004962   32280 type.go:168] "Request Body" body=""
	I1002 20:15:53.005025   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:53.005338   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:53.504924   32280 type.go:168] "Request Body" body=""
	I1002 20:15:53.505008   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:53.505327   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:54.004900   32280 type.go:168] "Request Body" body=""
	I1002 20:15:54.004970   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:54.005314   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:54.505066   32280 type.go:168] "Request Body" body=""
	I1002 20:15:54.505137   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:54.505438   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:54.505496   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:55.005002   32280 type.go:168] "Request Body" body=""
	I1002 20:15:55.005067   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:55.005372   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:55.504901   32280 type.go:168] "Request Body" body=""
	I1002 20:15:55.504971   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:55.505282   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:56.004915   32280 type.go:168] "Request Body" body=""
	I1002 20:15:56.004985   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:56.005314   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:56.504880   32280 type.go:168] "Request Body" body=""
	I1002 20:15:56.504955   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:56.505267   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:57.004835   32280 type.go:168] "Request Body" body=""
	I1002 20:15:57.004920   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:57.005242   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:57.005291   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:57.505877   32280 type.go:168] "Request Body" body=""
	I1002 20:15:57.505940   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:57.506245   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:58.005907   32280 type.go:168] "Request Body" body=""
	I1002 20:15:58.005991   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:58.006342   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:58.504964   32280 type.go:168] "Request Body" body=""
	I1002 20:15:58.505032   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:58.505329   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:59.004907   32280 type.go:168] "Request Body" body=""
	I1002 20:15:59.005002   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:59.005333   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:59.005397   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:59.505208   32280 type.go:168] "Request Body" body=""
	I1002 20:15:59.505273   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:59.505578   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:00.005007   32280 type.go:168] "Request Body" body=""
	I1002 20:16:00.005070   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:00.005368   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:00.505154   32280 type.go:168] "Request Body" body=""
	I1002 20:16:00.505223   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:00.505548   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:01.005111   32280 type.go:168] "Request Body" body=""
	I1002 20:16:01.005187   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:01.005494   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:01.005546   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:01.505154   32280 type.go:168] "Request Body" body=""
	I1002 20:16:01.505217   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:01.505529   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:02.005146   32280 type.go:168] "Request Body" body=""
	I1002 20:16:02.005224   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:02.005550   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:02.505113   32280 type.go:168] "Request Body" body=""
	I1002 20:16:02.505181   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:02.505501   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:03.005066   32280 type.go:168] "Request Body" body=""
	I1002 20:16:03.005132   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:03.005422   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:03.505093   32280 type.go:168] "Request Body" body=""
	I1002 20:16:03.505162   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:03.505508   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:03.505564   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:04.005055   32280 type.go:168] "Request Body" body=""
	I1002 20:16:04.005119   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:04.005406   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:04.505180   32280 type.go:168] "Request Body" body=""
	I1002 20:16:04.505248   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:04.505566   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:05.005130   32280 type.go:168] "Request Body" body=""
	I1002 20:16:05.005192   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:05.005494   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:05.505063   32280 type.go:168] "Request Body" body=""
	I1002 20:16:05.505131   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:05.505442   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:06.005022   32280 type.go:168] "Request Body" body=""
	I1002 20:16:06.005086   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:06.005392   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:06.005444   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:06.505030   32280 type.go:168] "Request Body" body=""
	I1002 20:16:06.505095   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:06.505395   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:07.004971   32280 type.go:168] "Request Body" body=""
	I1002 20:16:07.005038   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:07.005337   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:07.504911   32280 type.go:168] "Request Body" body=""
	I1002 20:16:07.505004   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:07.505316   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:08.004917   32280 type.go:168] "Request Body" body=""
	I1002 20:16:08.004990   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:08.005311   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:08.504887   32280 type.go:168] "Request Body" body=""
	I1002 20:16:08.504958   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:08.505256   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:08.505311   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:09.005884   32280 type.go:168] "Request Body" body=""
	I1002 20:16:09.005950   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:09.006258   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:09.505071   32280 type.go:168] "Request Body" body=""
	I1002 20:16:09.505141   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:09.505485   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:10.005085   32280 type.go:168] "Request Body" body=""
	I1002 20:16:10.005150   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:10.005494   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:10.505286   32280 type.go:168] "Request Body" body=""
	I1002 20:16:10.505357   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:10.505685   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:10.505751   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	[... ~120 further GET/response cycles elided: the identical request to https://192.168.49.2:8441/api/v1/nodes/functional-753218 was retried every ~500 ms from 20:16:11.005 through 20:17:10.505, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 repeated the "(will retry)" warning about every fifth attempt ...]
	I1002 20:17:11.005128   32280 type.go:168] "Request Body" body=""
	I1002 20:17:11.005195   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:11.005534   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:11.005597   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:11.505120   32280 type.go:168] "Request Body" body=""
	I1002 20:17:11.505189   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:11.505524   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:12.005153   32280 type.go:168] "Request Body" body=""
	I1002 20:17:12.005225   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:12.005562   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:12.505110   32280 type.go:168] "Request Body" body=""
	I1002 20:17:12.505174   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:12.505532   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:13.005106   32280 type.go:168] "Request Body" body=""
	I1002 20:17:13.005174   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:13.005476   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:13.505007   32280 type.go:168] "Request Body" body=""
	I1002 20:17:13.505068   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:13.505435   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:13.505488   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:14.005005   32280 type.go:168] "Request Body" body=""
	I1002 20:17:14.005066   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:14.005383   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:14.505172   32280 type.go:168] "Request Body" body=""
	I1002 20:17:14.505244   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:14.505573   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:15.005134   32280 type.go:168] "Request Body" body=""
	I1002 20:17:15.005205   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:15.005503   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:15.505066   32280 type.go:168] "Request Body" body=""
	I1002 20:17:15.505141   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:15.505446   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:15.505511   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:16.005011   32280 type.go:168] "Request Body" body=""
	I1002 20:17:16.005080   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:16.005386   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:16.504935   32280 type.go:168] "Request Body" body=""
	I1002 20:17:16.505015   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:16.505327   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:17.004855   32280 type.go:168] "Request Body" body=""
	I1002 20:17:17.004919   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:17.005223   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:17.505899   32280 type.go:168] "Request Body" body=""
	I1002 20:17:17.505967   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:17.506302   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:17.506357   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:18.004876   32280 type.go:168] "Request Body" body=""
	I1002 20:17:18.004943   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:18.005245   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:18.504839   32280 type.go:168] "Request Body" body=""
	I1002 20:17:18.504915   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:18.505232   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:19.005865   32280 type.go:168] "Request Body" body=""
	I1002 20:17:19.005947   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:19.006269   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:19.505022   32280 type.go:168] "Request Body" body=""
	I1002 20:17:19.505094   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:19.505407   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:20.004991   32280 type.go:168] "Request Body" body=""
	I1002 20:17:20.005069   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:20.005405   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:20.005466   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:20.505228   32280 type.go:168] "Request Body" body=""
	I1002 20:17:20.505297   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:20.505591   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:21.005210   32280 type.go:168] "Request Body" body=""
	I1002 20:17:21.005276   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:21.005584   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:21.505147   32280 type.go:168] "Request Body" body=""
	I1002 20:17:21.505208   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:21.505526   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:22.005059   32280 type.go:168] "Request Body" body=""
	I1002 20:17:22.005124   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:22.005426   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:22.504985   32280 type.go:168] "Request Body" body=""
	I1002 20:17:22.505049   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:22.505347   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:22.505407   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:23.004930   32280 type.go:168] "Request Body" body=""
	I1002 20:17:23.005006   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:23.005328   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:23.504881   32280 type.go:168] "Request Body" body=""
	I1002 20:17:23.504945   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:23.505245   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:24.005892   32280 type.go:168] "Request Body" body=""
	I1002 20:17:24.005969   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:24.006315   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:24.505033   32280 type.go:168] "Request Body" body=""
	I1002 20:17:24.505105   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:24.505414   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:24.505472   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:25.004948   32280 type.go:168] "Request Body" body=""
	I1002 20:17:25.005011   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:25.005380   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:25.504947   32280 type.go:168] "Request Body" body=""
	I1002 20:17:25.505016   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:25.505308   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:26.004843   32280 type.go:168] "Request Body" body=""
	I1002 20:17:26.004909   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:26.005238   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:26.504813   32280 type.go:168] "Request Body" body=""
	I1002 20:17:26.504873   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:26.505173   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:27.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:17:27.005931   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:27.006247   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:27.006305   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:27.505850   32280 type.go:168] "Request Body" body=""
	I1002 20:17:27.505914   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:27.506242   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:28.004933   32280 type.go:168] "Request Body" body=""
	I1002 20:17:28.005009   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:28.005342   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:28.504866   32280 type.go:168] "Request Body" body=""
	I1002 20:17:28.505005   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:28.505322   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:29.004896   32280 type.go:168] "Request Body" body=""
	I1002 20:17:29.004966   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:29.005261   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:29.505004   32280 type.go:168] "Request Body" body=""
	I1002 20:17:29.505069   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:29.505365   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:29.505422   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:30.004927   32280 type.go:168] "Request Body" body=""
	I1002 20:17:30.004988   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:30.005290   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:30.504959   32280 type.go:168] "Request Body" body=""
	I1002 20:17:30.505027   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:30.505340   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:31.004932   32280 type.go:168] "Request Body" body=""
	I1002 20:17:31.005002   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:31.005338   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:31.504887   32280 type.go:168] "Request Body" body=""
	I1002 20:17:31.504956   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:31.505260   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:32.004876   32280 type.go:168] "Request Body" body=""
	I1002 20:17:32.004950   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:32.005251   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:32.005312   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:32.505895   32280 type.go:168] "Request Body" body=""
	I1002 20:17:32.505961   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:32.506274   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:33.004876   32280 type.go:168] "Request Body" body=""
	I1002 20:17:33.004958   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:33.005280   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:33.504821   32280 type.go:168] "Request Body" body=""
	I1002 20:17:33.504892   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:33.505232   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:34.005931   32280 type.go:168] "Request Body" body=""
	I1002 20:17:34.006061   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:34.006376   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:34.006427   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:34.505046   32280 type.go:168] "Request Body" body=""
	I1002 20:17:34.505112   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:34.505397   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:35.004981   32280 type.go:168] "Request Body" body=""
	I1002 20:17:35.005045   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:35.005370   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:35.504929   32280 type.go:168] "Request Body" body=""
	I1002 20:17:35.505015   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:35.505318   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:36.004980   32280 type.go:168] "Request Body" body=""
	I1002 20:17:36.005058   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:36.005394   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:36.504997   32280 type.go:168] "Request Body" body=""
	I1002 20:17:36.505060   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:36.505342   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:36.505398   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:37.004903   32280 type.go:168] "Request Body" body=""
	I1002 20:17:37.004978   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:37.005282   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:37.504878   32280 type.go:168] "Request Body" body=""
	I1002 20:17:37.504942   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:37.505231   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:38.005855   32280 type.go:168] "Request Body" body=""
	I1002 20:17:38.005918   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:38.006208   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:38.505835   32280 type.go:168] "Request Body" body=""
	I1002 20:17:38.505904   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:38.506229   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:38.506296   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:39.004853   32280 type.go:168] "Request Body" body=""
	I1002 20:17:39.004944   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:39.005263   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:39.505135   32280 type.go:168] "Request Body" body=""
	I1002 20:17:39.505217   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:39.505615   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:40.005193   32280 type.go:168] "Request Body" body=""
	I1002 20:17:40.005282   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:40.005581   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:40.505135   32280 type.go:168] "Request Body" body=""
	I1002 20:17:40.505207   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:40.505537   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:41.005103   32280 type.go:168] "Request Body" body=""
	I1002 20:17:41.005165   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:41.005505   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:41.005563   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:41.505063   32280 type.go:168] "Request Body" body=""
	I1002 20:17:41.505150   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:41.505490   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:42.005054   32280 type.go:168] "Request Body" body=""
	I1002 20:17:42.005160   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:42.005471   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:42.505019   32280 type.go:168] "Request Body" body=""
	I1002 20:17:42.505084   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:42.505402   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:43.004932   32280 type.go:168] "Request Body" body=""
	I1002 20:17:43.005022   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:43.005350   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:43.504923   32280 type.go:168] "Request Body" body=""
	I1002 20:17:43.505007   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:43.505339   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:43.505393   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:44.004924   32280 type.go:168] "Request Body" body=""
	I1002 20:17:44.005006   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:44.005323   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:44.505096   32280 type.go:168] "Request Body" body=""
	I1002 20:17:44.505171   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:44.505478   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:45.005011   32280 type.go:168] "Request Body" body=""
	I1002 20:17:45.005090   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:45.005399   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:45.504952   32280 type.go:168] "Request Body" body=""
	I1002 20:17:45.505012   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:45.505310   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:46.004864   32280 type.go:168] "Request Body" body=""
	I1002 20:17:46.004951   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:46.005294   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:46.005355   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:46.504873   32280 type.go:168] "Request Body" body=""
	I1002 20:17:46.504940   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:46.505244   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:47.005848   32280 type.go:168] "Request Body" body=""
	I1002 20:17:47.005930   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:47.006252   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:47.504816   32280 type.go:168] "Request Body" body=""
	I1002 20:17:47.504905   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:47.505215   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:48.005846   32280 type.go:168] "Request Body" body=""
	I1002 20:17:48.005933   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:48.006242   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:48.006300   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:48.505916   32280 type.go:168] "Request Body" body=""
	I1002 20:17:48.505980   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:48.506270   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:49.004828   32280 type.go:168] "Request Body" body=""
	I1002 20:17:49.004910   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:49.005240   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:49.504935   32280 type.go:168] "Request Body" body=""
	I1002 20:17:49.505024   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:49.505373   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:50.004932   32280 type.go:168] "Request Body" body=""
	I1002 20:17:50.005011   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:50.005340   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:50.505078   32280 type.go:168] "Request Body" body=""
	I1002 20:17:50.505147   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:50.505479   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:50.505532   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:51.005024   32280 type.go:168] "Request Body" body=""
	I1002 20:17:51.005103   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:51.005420   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:51.504998   32280 type.go:168] "Request Body" body=""
	I1002 20:17:51.505075   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:51.505410   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:52.005000   32280 type.go:168] "Request Body" body=""
	I1002 20:17:52.005081   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:52.005428   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:52.505012   32280 type.go:168] "Request Body" body=""
	I1002 20:17:52.505100   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:52.505419   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:53.005015   32280 type.go:168] "Request Body" body=""
	I1002 20:17:53.005100   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:53.005438   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:53.005495   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:53.504988   32280 type.go:168] "Request Body" body=""
	I1002 20:17:53.505059   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:53.505385   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:54.004971   32280 type.go:168] "Request Body" body=""
	I1002 20:17:54.005048   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:54.005388   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:54.505199   32280 type.go:168] "Request Body" body=""
	I1002 20:17:54.505286   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:54.505624   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:55.005199   32280 type.go:168] "Request Body" body=""
	I1002 20:17:55.005287   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:55.005639   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:55.005734   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:55.505238   32280 type.go:168] "Request Body" body=""
	I1002 20:17:55.505303   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:55.505621   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:56.005174   32280 type.go:168] "Request Body" body=""
	I1002 20:17:56.005258   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:56.005612   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:56.505166   32280 type.go:168] "Request Body" body=""
	I1002 20:17:56.505231   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:56.505523   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:57.005076   32280 type.go:168] "Request Body" body=""
	I1002 20:17:57.005156   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:57.005631   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:57.505096   32280 type.go:168] "Request Body" body=""
	I1002 20:17:57.505167   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:57.505488   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:57.505554   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:58.005160   32280 type.go:168] "Request Body" body=""
	I1002 20:17:58.005227   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:58.005552   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:58.505084   32280 type.go:168] "Request Body" body=""
	I1002 20:17:58.505166   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:58.505512   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:59.005073   32280 type.go:168] "Request Body" body=""
	I1002 20:17:59.005138   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:59.005430   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:59.505390   32280 type.go:168] "Request Body" body=""
	I1002 20:17:59.505459   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:59.505823   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:59.505890   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:00.005468   32280 type.go:168] "Request Body" body=""
	I1002 20:18:00.005540   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:00.005877   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:00.505768   32280 type.go:168] "Request Body" body=""
	I1002 20:18:00.505843   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:00.506177   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:01.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:18:01.005945   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:01.006334   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:01.504923   32280 type.go:168] "Request Body" body=""
	I1002 20:18:01.504996   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:01.505321   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:02.004953   32280 type.go:168] "Request Body" body=""
	I1002 20:18:02.005017   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:02.005334   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:02.005385   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:02.504884   32280 type.go:168] "Request Body" body=""
	I1002 20:18:02.504963   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:02.505259   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:03.004934   32280 type.go:168] "Request Body" body=""
	I1002 20:18:03.005005   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:03.005356   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:03.504932   32280 type.go:168] "Request Body" body=""
	I1002 20:18:03.505015   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:03.505307   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:04.004878   32280 type.go:168] "Request Body" body=""
	I1002 20:18:04.004960   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:04.005291   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:04.505045   32280 type.go:168] "Request Body" body=""
	I1002 20:18:04.505131   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:04.505465   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:04.505520   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:05.005008   32280 type.go:168] "Request Body" body=""
	I1002 20:18:05.005088   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:05.005422   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:05.504977   32280 type.go:168] "Request Body" body=""
	I1002 20:18:05.505046   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:05.505355   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:06.004890   32280 type.go:168] "Request Body" body=""
	I1002 20:18:06.004955   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:06.005271   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:06.505878   32280 type.go:168] "Request Body" body=""
	I1002 20:18:06.505942   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:06.506244   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:06.506297   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:07.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:18:07.005943   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:07.006253   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:07.504887   32280 type.go:168] "Request Body" body=""
	I1002 20:18:07.504964   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:07.505251   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:08.004916   32280 type.go:168] "Request Body" body=""
	I1002 20:18:08.004981   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:08.005306   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:08.504856   32280 type.go:168] "Request Body" body=""
	I1002 20:18:08.504941   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:08.505239   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:09.005880   32280 type.go:168] "Request Body" body=""
	I1002 20:18:09.005952   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:09.006285   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:09.006339   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:09.505080   32280 type.go:168] "Request Body" body=""
	I1002 20:18:09.505146   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:09.505447   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:10.005082   32280 type.go:168] "Request Body" body=""
	I1002 20:18:10.005147   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:10.005473   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:10.505242   32280 type.go:168] "Request Body" body=""
	I1002 20:18:10.505307   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:10.505606   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:11.005169   32280 type.go:168] "Request Body" body=""
	I1002 20:18:11.005243   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:11.005570   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:11.505121   32280 type.go:168] "Request Body" body=""
	I1002 20:18:11.505186   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:11.505487   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:11.505538   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:12.005071   32280 type.go:168] "Request Body" body=""
	I1002 20:18:12.005141   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:12.005461   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:12.505817   32280 type.go:168] "Request Body" body=""
	I1002 20:18:12.505883   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:12.506177   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:13.005815   32280 type.go:168] "Request Body" body=""
	I1002 20:18:13.005887   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:13.006211   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:13.505873   32280 type.go:168] "Request Body" body=""
	I1002 20:18:13.505942   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:13.506236   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:13.506287   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:14.004813   32280 type.go:168] "Request Body" body=""
	I1002 20:18:14.004883   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:14.005208   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:14.505838   32280 type.go:168] "Request Body" body=""
	I1002 20:18:14.505912   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:14.506225   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:15.005871   32280 type.go:168] "Request Body" body=""
	I1002 20:18:15.005949   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:15.006278   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:15.504830   32280 type.go:168] "Request Body" body=""
	I1002 20:18:15.504900   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:15.505190   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:16.004845   32280 type.go:168] "Request Body" body=""
	I1002 20:18:16.004935   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:16.005267   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:16.005321   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:16.504844   32280 type.go:168] "Request Body" body=""
	I1002 20:18:16.504915   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:16.505208   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:17.004848   32280 type.go:168] "Request Body" body=""
	I1002 20:18:17.005199   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:17.005523   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:17.505033   32280 type.go:168] "Request Body" body=""
	I1002 20:18:17.505107   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:17.505434   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:18.004982   32280 type.go:168] "Request Body" body=""
	I1002 20:18:18.005069   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:18.005443   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:18.005498   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:18.505161   32280 type.go:168] "Request Body" body=""
	I1002 20:18:18.505228   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:18.505530   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:19.005238   32280 type.go:168] "Request Body" body=""
	I1002 20:18:19.005302   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:19.005626   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:19.505401   32280 type.go:168] "Request Body" body=""
	I1002 20:18:19.505466   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:19.505798   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:20.005591   32280 type.go:168] "Request Body" body=""
	I1002 20:18:20.005673   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:20.006000   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:20.006051   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:20.505823   32280 type.go:168] "Request Body" body=""
	I1002 20:18:20.505886   32280 node_ready.go:38] duration metric: took 6m0.001160736s for node "functional-753218" to be "Ready" ...
	I1002 20:18:20.508034   32280 out.go:203] 
	W1002 20:18:20.509328   32280 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:18:20.509341   32280 out.go:285] * 
	W1002 20:18:20.511008   32280 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:18:20.512144   32280 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.321103858Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.321491573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.322911304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.323405869Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.337779996Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=95f1ede9-941f-4144-88f8-a404282372bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.338388539Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=7925e8f1-8190-46db-ba21-210ddaa8dfad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.339261167Z" level=info msg="createCtr: deleting container ID 01a007256b26260bbda1a485ac64ac3c89901e23abb5a27a0f834cf970bbb39d from idIndex" id=95f1ede9-941f-4144-88f8-a404282372bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.339292438Z" level=info msg="createCtr: removing container 01a007256b26260bbda1a485ac64ac3c89901e23abb5a27a0f834cf970bbb39d" id=95f1ede9-941f-4144-88f8-a404282372bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.339320859Z" level=info msg="createCtr: deleting container 01a007256b26260bbda1a485ac64ac3c89901e23abb5a27a0f834cf970bbb39d from storage" id=95f1ede9-941f-4144-88f8-a404282372bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.339813064Z" level=info msg="createCtr: deleting container ID 3de785e23a0e2a9b688fd47d95f0e222abadb184c86c16208d5674e3ecc87423 from idIndex" id=7925e8f1-8190-46db-ba21-210ddaa8dfad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.339845489Z" level=info msg="createCtr: removing container 3de785e23a0e2a9b688fd47d95f0e222abadb184c86c16208d5674e3ecc87423" id=7925e8f1-8190-46db-ba21-210ddaa8dfad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.339873272Z" level=info msg="createCtr: deleting container 3de785e23a0e2a9b688fd47d95f0e222abadb184c86c16208d5674e3ecc87423 from storage" id=7925e8f1-8190-46db-ba21-210ddaa8dfad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.34258012Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-753218_kube-system_8f4d4ea1035e2535a9c472062bfdd7f7_0" id=95f1ede9-941f-4144-88f8-a404282372bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.342938703Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-753218_kube-system_b25a71e49a335bbe853872de1b1e3093_0" id=7925e8f1-8190-46db-ba21-210ddaa8dfad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.313913897Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=d8ae3c90-d4ca-4ad6-873d-1994584e161b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.314705476Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=7cd2779d-b010-4b8c-9573-780ae7bedbb6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.315549863Z" level=info msg="Creating container: kube-system/etcd-functional-753218/etcd" id=f7613c0c-6e95-41c4-a5fd-ef3dce797596 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.315792146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.319001536Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.319368187Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.33516141Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f7613c0c-6e95-41c4-a5fd-ef3dce797596 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.336600751Z" level=info msg="createCtr: deleting container ID f5ea9f185f346f3d3e3da1f5f6186ca0c1fd2f6c58678ae2aa18ebfc909aba4b from idIndex" id=f7613c0c-6e95-41c4-a5fd-ef3dce797596 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.336636077Z" level=info msg="createCtr: removing container f5ea9f185f346f3d3e3da1f5f6186ca0c1fd2f6c58678ae2aa18ebfc909aba4b" id=f7613c0c-6e95-41c4-a5fd-ef3dce797596 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.336695824Z" level=info msg="createCtr: deleting container f5ea9f185f346f3d3e3da1f5f6186ca0c1fd2f6c58678ae2aa18ebfc909aba4b from storage" id=f7613c0c-6e95-41c4-a5fd-ef3dce797596 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.33870937Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-753218_kube-system_91f2b96cb3e3f380ded17f30e8d873bd_0" id=f7613c0c-6e95-41c4-a5fd-ef3dce797596 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:18:22.105993    4335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:18:22.106440    4335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:18:22.108031    4335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:18:22.108432    4335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:18:22.109963    4335 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:18:22 up  1:00,  0 user,  load average: 0.05, 0.05, 0.06
	Linux functional-753218 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:18:17 functional-753218 kubelet[1799]: E1002 20:18:17.342946    1799 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:18:17 functional-753218 kubelet[1799]:         container kube-apiserver start failed in pod kube-apiserver-functional-753218_kube-system(8f4d4ea1035e2535a9c472062bfdd7f7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:17 functional-753218 kubelet[1799]:  > logger="UnhandledError"
	Oct 02 20:18:17 functional-753218 kubelet[1799]: E1002 20:18:17.342989    1799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753218" podUID="8f4d4ea1035e2535a9c472062bfdd7f7"
	Oct 02 20:18:17 functional-753218 kubelet[1799]: E1002 20:18:17.343140    1799 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:18:17 functional-753218 kubelet[1799]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:17 functional-753218 kubelet[1799]:  > podSandboxID="de1cc60186f989d4e0a8994c95a3f2e5173970c97e595ad7db2d469e1551df14"
	Oct 02 20:18:17 functional-753218 kubelet[1799]: E1002 20:18:17.343209    1799 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:18:17 functional-753218 kubelet[1799]:         container kube-scheduler start failed in pod kube-scheduler-functional-753218_kube-system(b25a71e49a335bbe853872de1b1e3093): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:17 functional-753218 kubelet[1799]:  > logger="UnhandledError"
	Oct 02 20:18:17 functional-753218 kubelet[1799]: E1002 20:18:17.344344    1799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753218" podUID="b25a71e49a335bbe853872de1b1e3093"
	Oct 02 20:18:18 functional-753218 kubelet[1799]: E1002 20:18:18.019553    1799 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-753218.186ac570b511d2a5\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753218.186ac570b511d2a5  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753218,UID:functional-753218,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753218 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753218,},FirstTimestamp:2025-10-02 20:08:12.306453157 +0000 UTC m=+0.389048074,LastTimestamp:2025-10-02 20:08:12.30766191 +0000 UTC m=+0.390256814,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753218,}"
	Oct 02 20:18:18 functional-753218 kubelet[1799]: E1002 20:18:18.313518    1799 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:18:18 functional-753218 kubelet[1799]: E1002 20:18:18.338994    1799 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:18:18 functional-753218 kubelet[1799]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:18 functional-753218 kubelet[1799]:  > podSandboxID="65675f5fefd97e29be9e11728def45d5a2c472bac18f3ca682b57fda50e5abf7"
	Oct 02 20:18:18 functional-753218 kubelet[1799]: E1002 20:18:18.339099    1799 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:18:18 functional-753218 kubelet[1799]:         container etcd start failed in pod etcd-functional-753218_kube-system(91f2b96cb3e3f380ded17f30e8d873bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:18 functional-753218 kubelet[1799]:  > logger="UnhandledError"
	Oct 02 20:18:18 functional-753218 kubelet[1799]: E1002 20:18:18.339137    1799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753218" podUID="91f2b96cb3e3f380ded17f30e8d873bd"
	Oct 02 20:18:19 functional-753218 kubelet[1799]: E1002 20:18:19.371428    1799 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 20:18:19 functional-753218 kubelet[1799]: E1002 20:18:19.624437    1799 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 02 20:18:19 functional-753218 kubelet[1799]: E1002 20:18:19.986926    1799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753218?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:18:20 functional-753218 kubelet[1799]: I1002 20:18:20.185014    1799 kubelet_node_status.go:75] "Attempting to register node" node="functional-753218"
	Oct 02 20:18:20 functional-753218 kubelet[1799]: E1002 20:18:20.185332    1799 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753218"
	

                                                
                                                
-- /stdout --
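The repeated "cannot open sd-bus: No such file or directory" CreateContainer failures in the CRI-O and kubelet logs above are consistent with an OCI runtime configured for the systemd cgroup manager while no systemd bus socket is reachable inside the node container, which would also explain why etcd, kube-apiserver, and kube-scheduler never start and port 8441 stays refused. A minimal triage sketch in Go, under the assumption that the conventional systemd/D-Bus socket paths apply (the probe and its paths are assumptions, not taken from this report):

	// probe_sdbus.go: hedged triage sketch. Checks for the sockets an OCI
	// runtime needs when driving cgroups through systemd. The paths are the
	// conventional locations, assumed here rather than confirmed for kicbase.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		for _, p := range []string{
			"/run/systemd/private",        // systemd's private API socket
			"/run/dbus/system_bus_socket", // system D-Bus socket
		} {
			if _, err := os.Stat(p); err != nil {
				// A missing socket here would match "cannot open sd-bus".
				fmt.Printf("%s: %v\n", p, err)
			} else {
				fmt.Printf("%s: present\n", p)
			}
		}
	}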
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218: exit status 2 (290.021286ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753218" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (368.80s)
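The wait that fails above is a plain poll-until-deadline loop: GET the node roughly every 500ms, treat "connection refused" as retryable, and give up once the 6m deadline expires. A minimal client-go sketch of that pattern (illustrative only, not minikube's actual node_ready.go; waitNodeReady is a made-up name):

	// Package readiness sketches the retry loop visible in the log above:
	// poll the node's Ready condition every 500ms until a 6m deadline expires.
	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitNodeReady(client kubernetes.Interface, name string) error {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			// Errors such as "connection refused" are retried, matching the
			// repeated warnings above, until the deadline is reached.
			select {
			case <-ctx.Done():
				return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
			case <-time.After(500 * time.Millisecond):
			}
		}
	}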

                                                
                                    
TestFunctional/serial/KubectlGetPods (1.97s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-753218 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-753218 get po -A: exit status 1 (52.92259ms)

                                                
                                                
** stderr ** 
	E1002 20:18:23.011795   36321 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:18:23.012090   36321 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:18:23.013475   36321 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:18:23.013732   36321 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:18:23.015082   36321 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-753218 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1002 20:18:23.011795   36321 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1002 20:18:23.012090   36321 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1002 20:18:23.013475   36321 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1002 20:18:23.013732   36321 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1002 20:18:23.015082   36321 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nThe connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-753218 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-753218 get po -A"
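The failed assertions above reduce to three checks: the kubectl command must exit zero, its stderr must be empty, and its stdout must mention kube-system. A minimal sketch of that shape (illustrative; checkGetPods is a hypothetical helper, not the real functional_test.go code):

	// Package sketch illustrates the assertion pattern used by the test above.
	package sketch

	import (
		"bytes"
		"os/exec"
		"strings"
		"testing"
	)

	func checkGetPods(t *testing.T) {
		cmd := exec.Command("kubectl", "--context", "functional-753218", "get", "po", "-A")
		var stdout, stderr bytes.Buffer
		cmd.Stdout, cmd.Stderr = &stdout, &stderr
		if err := cmd.Run(); err != nil {
			t.Fatalf("kubectl get po -A: %v\nstderr: %s", err, stderr.String())
		}
		if stderr.Len() != 0 {
			t.Errorf("expected empty stderr, got %q", stderr.String())
		}
		if !strings.Contains(stdout.String(), "kube-system") {
			t.Errorf("expected stdout to include kube-system, got %q", stdout.String())
		}
	}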
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753218
helpers_test.go:243: (dbg) docker inspect functional-753218:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	        "Created": "2025-10-02T20:04:01.746609873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27425,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:04:01.779241985Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hosts",
	        "LogPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2-json.log",
	        "Name": "/functional-753218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	                "LowerDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-753218",
	                "Source": "/var/lib/docker/volumes/functional-753218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753218",
	                "name.minikube.sigs.k8s.io": "functional-753218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "004081b2f3377fd186346a9b01eb1a4bc9a3def8255492ba6d60a3d97d3ae02f",
	            "SandboxKey": "/var/run/docker/netns/004081b2f337",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:da:57:41:87:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5a9808f557430f8bb8f634f8b99193d9e4883c7bab84d388ec04ce3ac0291ef6",
	                    "EndpointID": "2b07b33f476d99b70eed072dbaf735ba0ad3a85f48f17fd63921a2bfbfd2db45",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753218",
	                        "13014a14c3fc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
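The inspect output above shows 8441/tcp, the apiserver port, published on 127.0.0.1:32781. A minimal sketch, assuming the Docker Go SDK (github.com/docker/docker/client) is available, of reading that binding programmatically; the post-mortem below goes through minikube status instead:

	// Package sketch reads the host binding for the apiserver port seen above.
	package sketch

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	func printAPIServerBinding() error {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			return err
		}
		insp, err := cli.ContainerInspect(context.Background(), "functional-753218")
		if err != nil {
			return err
		}
		// NetworkSettings.Ports maps container ports to host bindings, e.g.
		// "8441/tcp" -> 127.0.0.1:32781 in the inspect output above.
		for _, b := range insp.NetworkSettings.Ports[nat.Port("8441/tcp")] {
			fmt.Printf("apiserver published on %s:%s\n", b.HostIP, b.HostPort)
		}
		return nil
	}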
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218: exit status 2 (278.809955ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 logs -n 25
helpers_test.go:260: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-961266                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-961266   │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │ 02 Oct 25 19:46 UTC │
	│ start   │ --download-only -p download-docker-213285 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-213285 │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	│ delete  │ -p download-docker-213285                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-213285 │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │ 02 Oct 25 19:46 UTC │
	│ start   │ --download-only -p binary-mirror-331754 --alsologtostderr --binary-mirror http://127.0.0.1:42675 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-331754   │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	│ delete  │ -p binary-mirror-331754                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-331754   │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │ 02 Oct 25 19:46 UTC │
	│ addons  │ disable dashboard -p addons-486748                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-486748          │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	│ addons  │ enable dashboard -p addons-486748                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-486748          │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	│ start   │ -p addons-486748 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-486748          │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	│ delete  │ -p addons-486748                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-486748          │ jenkins │ v1.37.0 │ 02 Oct 25 19:55 UTC │ 02 Oct 25 19:55 UTC │
	│ start   │ -p nospam-547008 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-547008 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 19:55 UTC │                     │
	│ start   │ nospam-547008 --log_dir /tmp/nospam-547008 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │                     │
	│ start   │ nospam-547008 --log_dir /tmp/nospam-547008 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │                     │
	│ start   │ nospam-547008 --log_dir /tmp/nospam-547008 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │                     │
	│ pause   │ nospam-547008 --log_dir /tmp/nospam-547008 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ pause   │ nospam-547008 --log_dir /tmp/nospam-547008 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ pause   │ nospam-547008 --log_dir /tmp/nospam-547008 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ delete  │ -p nospam-547008                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-547008          │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ start   │ -p functional-753218 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-753218      │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │                     │
	│ start   │ -p functional-753218 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-753218      │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:12:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
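	
	The transcript below is the --alsologtostderr stream of the most recent start attempt in the audit table above. To capture an equivalent transcript by hand, re-run the start command from the table and redirect its klog stream (stderr); the output filename here is illustrative, not part of the report:
	
		out/minikube-linux-amd64 start -p functional-753218 --alsologtostderr -v=8 2> last-start.log
	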
	I1002 20:12:14.161053   32280 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:12:14.161314   32280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:12:14.161324   32280 out.go:374] Setting ErrFile to fd 2...
	I1002 20:12:14.161329   32280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:12:14.161525   32280 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:12:14.161965   32280 out.go:368] Setting JSON to false
	I1002 20:12:14.162918   32280 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3283,"bootTime":1759432651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:12:14.163001   32280 start.go:140] virtualization: kvm guest
	I1002 20:12:14.165258   32280 out.go:179] * [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:12:14.166596   32280 notify.go:221] Checking for updates...
	I1002 20:12:14.166661   32280 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:12:14.168151   32280 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:12:14.169781   32280 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:14.170964   32280 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:12:14.172159   32280 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:12:14.173393   32280 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:12:14.175005   32280 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:12:14.175089   32280 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:12:14.198042   32280 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:12:14.198110   32280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:12:14.249812   32280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:12:14.240278836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:12:14.249943   32280 docker.go:319] overlay module found
	I1002 20:12:14.251744   32280 out.go:179] * Using the docker driver based on existing profile
	I1002 20:12:14.252771   32280 start.go:306] selected driver: docker
	I1002 20:12:14.252788   32280 start.go:936] validating driver "docker" against &{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:12:14.252894   32280 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:12:14.253012   32280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:12:14.302717   32280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:12:14.29341416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:12:14.303277   32280 cni.go:84] Creating CNI manager for ""
	I1002 20:12:14.303332   32280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:12:14.303374   32280 start.go:350] cluster config:
	{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:12:14.305248   32280 out.go:179] * Starting "functional-753218" primary control-plane node in "functional-753218" cluster
	I1002 20:12:14.306703   32280 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:12:14.308110   32280 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:12:14.309231   32280 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:12:14.309270   32280 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:12:14.309292   32280 cache.go:59] Caching tarball of preloaded images
	I1002 20:12:14.309321   32280 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:12:14.309392   32280 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:12:14.309404   32280 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:12:14.309506   32280 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/config.json ...
	I1002 20:12:14.328595   32280 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:12:14.328612   32280 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:12:14.328641   32280 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:12:14.328685   32280 start.go:361] acquireMachinesLock for functional-753218: {Name:mk742badf6f1dbafca8397e398143e758831ae3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:12:14.328749   32280 start.go:365] duration metric: took 40.346µs to acquireMachinesLock for "functional-753218"
	I1002 20:12:14.328768   32280 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:12:14.328773   32280 fix.go:55] fixHost starting: 
	I1002 20:12:14.328978   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:14.345315   32280 fix.go:113] recreateIfNeeded on functional-753218: state=Running err=<nil>
	W1002 20:12:14.345339   32280 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:12:14.347103   32280 out.go:252] * Updating the running docker "functional-753218" container ...
	I1002 20:12:14.347127   32280 machine.go:93] provisionDockerMachine start ...
	I1002 20:12:14.347175   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.364778   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:14.365056   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:14.365071   32280 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:12:14.506481   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:12:14.506514   32280 ubuntu.go:182] provisioning hostname "functional-753218"
	I1002 20:12:14.506576   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.523646   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:14.523886   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:14.523904   32280 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753218 && echo "functional-753218" | sudo tee /etc/hostname
	I1002 20:12:14.674327   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:12:14.674412   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.691957   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:14.692191   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:14.692210   32280 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753218/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:12:14.834109   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
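	
	The snippet above is minikube's idempotent /etc/hosts patch: it does nothing when some line already ends in the hostname, rewrites an existing 127.0.1.1 entry in place, and appends a fresh entry otherwise. A standalone sketch of the same logic (NAME is a placeholder for the hostname; assumes GNU grep/sed and sudo on the target node):
	
		NAME=functional-753218
		if ! grep -q "[[:space:]]${NAME}\$" /etc/hosts; then        # already pinned? do nothing
			if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
				sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" /etc/hosts  # rewrite in place
			else
				echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts                         # append new entry
			fi
		fi
	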
	I1002 20:12:14.834144   32280 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:12:14.834205   32280 ubuntu.go:190] setting up certificates
	I1002 20:12:14.834219   32280 provision.go:84] configureAuth start
	I1002 20:12:14.834287   32280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:12:14.852021   32280 provision.go:143] copyHostCerts
	I1002 20:12:14.852056   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:12:14.852091   32280 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:12:14.852111   32280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:12:14.852184   32280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:12:14.852289   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:12:14.852315   32280 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:12:14.852322   32280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:12:14.852367   32280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:12:14.852431   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:12:14.852454   32280 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:12:14.852460   32280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:12:14.852497   32280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:12:14.852565   32280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.functional-753218 san=[127.0.0.1 192.168.49.2 functional-753218 localhost minikube]
	I1002 20:12:14.908205   32280 provision.go:177] copyRemoteCerts
	I1002 20:12:14.908265   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:12:14.908316   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.925225   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.025356   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:12:15.025415   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:12:15.042012   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:12:15.042068   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:12:15.059080   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:12:15.059140   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:12:15.075501   32280 provision.go:87] duration metric: took 241.264617ms to configureAuth
	I1002 20:12:15.075530   32280 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:12:15.075723   32280 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:12:15.075835   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.092499   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:15.092718   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:15.092740   32280 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:12:15.350871   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:12:15.350899   32280 machine.go:96] duration metric: took 1.003764785s to provisionDockerMachine
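	
	The provisioning step that just completed writes a one-line environment file telling CRI-O to treat the Kubernetes service CIDR (10.96.0.0/12) as an insecure registry range, then restarts the daemon to pick it up. As a standalone script, with the values copied from the log:
	
		sudo mkdir -p /etc/sysconfig
		printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
		sudo systemctl restart crio
	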
	I1002 20:12:15.350913   32280 start.go:294] postStartSetup for "functional-753218" (driver="docker")
	I1002 20:12:15.350926   32280 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:12:15.350976   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:12:15.351010   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.368192   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.468976   32280 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:12:15.472512   32280 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1002 20:12:15.472527   32280 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1002 20:12:15.472540   32280 command_runner.go:130] > VERSION_ID="12"
	I1002 20:12:15.472545   32280 command_runner.go:130] > VERSION="12 (bookworm)"
	I1002 20:12:15.472553   32280 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1002 20:12:15.472556   32280 command_runner.go:130] > ID=debian
	I1002 20:12:15.472560   32280 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1002 20:12:15.472565   32280 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1002 20:12:15.472572   32280 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1002 20:12:15.472618   32280 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:12:15.472635   32280 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:12:15.472666   32280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:12:15.472731   32280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:12:15.472806   32280 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:12:15.472815   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:12:15.472889   32280 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts -> hosts in /etc/test/nested/copy/12851
	I1002 20:12:15.472896   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts -> /etc/test/nested/copy/12851/hosts
	I1002 20:12:15.472925   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12851
	I1002 20:12:15.480384   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:12:15.496865   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts --> /etc/test/nested/copy/12851/hosts (40 bytes)
	I1002 20:12:15.513635   32280 start.go:297] duration metric: took 162.708522ms for postStartSetup
	I1002 20:12:15.513745   32280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:12:15.513794   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.530644   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.628445   32280 command_runner.go:130] > 39%
	I1002 20:12:15.628745   32280 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:12:15.633076   32280 command_runner.go:130] > 179G
	I1002 20:12:15.633306   32280 fix.go:57] duration metric: took 1.304525715s for fixHost
	I1002 20:12:15.633325   32280 start.go:84] releasing machines lock for "functional-753218", held for 1.30456494s
	I1002 20:12:15.633398   32280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:12:15.650579   32280 ssh_runner.go:195] Run: cat /version.json
	I1002 20:12:15.650618   32280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:12:15.650631   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.650688   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.668938   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.669107   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.765770   32280 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1002 20:12:15.817112   32280 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 20:12:15.819166   32280 ssh_runner.go:195] Run: systemctl --version
	I1002 20:12:15.825335   32280 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1002 20:12:15.825364   32280 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 20:12:15.825559   32280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:12:15.861701   32280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 20:12:15.866192   32280 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 20:12:15.866262   32280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:12:15.866323   32280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:12:15.874084   32280 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:12:15.874106   32280 start.go:496] detecting cgroup driver to use...
	I1002 20:12:15.874141   32280 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:12:15.874206   32280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:12:15.887803   32280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:12:15.899530   32280 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:12:15.899588   32280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:12:15.913378   32280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:12:15.925494   32280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:12:16.013036   32280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:12:16.099049   32280 docker.go:234] disabling docker service ...
	I1002 20:12:16.099135   32280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:12:16.112698   32280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:12:16.124592   32280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:12:16.212924   32280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:12:16.298302   32280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
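	
	Before handing the node to CRI-O, the steps above stop, disable, and mask the competing runtimes (cri-dockerd first, then Docker itself) and confirm Docker is no longer active. Condensed into a standalone sketch, with the unit names taken from the log and || true guarding units that may be absent:
	
		for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
			sudo systemctl stop -f "$unit" || true
		done
		sudo systemctl disable cri-docker.socket docker.socket
		sudo systemctl mask cri-docker.service docker.service
		systemctl is-active --quiet docker || echo "docker is inactive"
	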
	I1002 20:12:16.310529   32280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:12:16.324186   32280 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
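	
	The write above leaves crictl pointed at the CRI-O socket; /etc/crictl.yaml is just that single runtime-endpoint line. Once the daemon is back up, the usual check (which this transcript itself performs a few lines later) is:
	
		cat /etc/crictl.yaml    # expect: runtime-endpoint: unix:///var/run/crio/crio.sock
		sudo crictl version     # expect RuntimeName: cri-o, RuntimeVersion: 1.34.1
	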
	I1002 20:12:16.324212   32280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:12:16.324248   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.332999   32280 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:12:16.333067   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.341758   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.350162   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.358406   32280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:12:16.365887   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.374465   32280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.382513   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.390861   32280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:12:16.397800   32280 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 20:12:16.397864   32280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:12:16.404831   32280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:12:16.487603   32280 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:12:19.404809   32280 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.917172928s)
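	
	The sed edits above leave /etc/crio/crio.conf.d/02-crio.conf declaring the pinned pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl, after which the units are reloaded and CRI-O is restarted. The expected fragment (reconstructed from the sed expressions, not copied from the node) can be spot-checked with:
	
		grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
		# pause_image = "registry.k8s.io/pause:3.10.1"
		# cgroup_manager = "systemd"
		# conmon_cgroup = "pod"
		#   "net.ipv4.ip_unprivileged_port_start=0",    (inside the default_sysctls = [ ... ] block)
	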
	I1002 20:12:19.404840   32280 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:12:19.404889   32280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:12:19.408896   32280 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 20:12:19.408924   32280 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 20:12:19.408935   32280 command_runner.go:130] > Device: 0,59	Inode: 3843        Links: 1
	I1002 20:12:19.408947   32280 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:12:19.408956   32280 command_runner.go:130] > Access: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.408964   32280 command_runner.go:130] > Modify: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.408977   32280 command_runner.go:130] > Change: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.408989   32280 command_runner.go:130] >  Birth: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.409044   32280 start.go:564] Will wait 60s for crictl version
	I1002 20:12:19.409092   32280 ssh_runner.go:195] Run: which crictl
	I1002 20:12:19.412689   32280 command_runner.go:130] > /usr/local/bin/crictl
	I1002 20:12:19.412744   32280 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:12:19.436957   32280 command_runner.go:130] > Version:  0.1.0
	I1002 20:12:19.436979   32280 command_runner.go:130] > RuntimeName:  cri-o
	I1002 20:12:19.436984   32280 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1002 20:12:19.436989   32280 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 20:12:19.437005   32280 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:12:19.437072   32280 ssh_runner.go:195] Run: crio --version
	I1002 20:12:19.464211   32280 command_runner.go:130] > crio version 1.34.1
	I1002 20:12:19.464228   32280 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:12:19.464234   32280 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:12:19.464240   32280 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:12:19.464244   32280 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:12:19.464248   32280 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:12:19.464252   32280 command_runner.go:130] >    Compiler:       gc
	I1002 20:12:19.464257   32280 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:12:19.464261   32280 command_runner.go:130] >    Linkmode:       static
	I1002 20:12:19.464264   32280 command_runner.go:130] >    BuildTags:
	I1002 20:12:19.464267   32280 command_runner.go:130] >      static
	I1002 20:12:19.464275   32280 command_runner.go:130] >      netgo
	I1002 20:12:19.464279   32280 command_runner.go:130] >      osusergo
	I1002 20:12:19.464283   32280 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:12:19.464288   32280 command_runner.go:130] >      seccomp
	I1002 20:12:19.464291   32280 command_runner.go:130] >      apparmor
	I1002 20:12:19.464298   32280 command_runner.go:130] >      selinux
	I1002 20:12:19.464302   32280 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:12:19.464306   32280 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:12:19.464310   32280 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:12:19.464385   32280 ssh_runner.go:195] Run: crio --version
	I1002 20:12:19.491564   32280 command_runner.go:130] > crio version 1.34.1
	I1002 20:12:19.491590   32280 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:12:19.491596   32280 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:12:19.491601   32280 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:12:19.491605   32280 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:12:19.491609   32280 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:12:19.491612   32280 command_runner.go:130] >    Compiler:       gc
	I1002 20:12:19.491619   32280 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:12:19.491623   32280 command_runner.go:130] >    Linkmode:       static
	I1002 20:12:19.491627   32280 command_runner.go:130] >    BuildTags:
	I1002 20:12:19.491630   32280 command_runner.go:130] >      static
	I1002 20:12:19.491634   32280 command_runner.go:130] >      netgo
	I1002 20:12:19.491637   32280 command_runner.go:130] >      osusergo
	I1002 20:12:19.491641   32280 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:12:19.491665   32280 command_runner.go:130] >      seccomp
	I1002 20:12:19.491671   32280 command_runner.go:130] >      apparmor
	I1002 20:12:19.491681   32280 command_runner.go:130] >      selinux
	I1002 20:12:19.491687   32280 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:12:19.491700   32280 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:12:19.491719   32280 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:12:19.493718   32280 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:12:19.495253   32280 cli_runner.go:164] Run: docker network inspect functional-753218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:12:19.512253   32280 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:12:19.516262   32280 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1002 20:12:19.516341   32280 kubeadm.go:883] updating cluster {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:12:19.516485   32280 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:12:19.516543   32280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:12:19.546693   32280 command_runner.go:130] > {
	I1002 20:12:19.546715   32280 command_runner.go:130] >   "images":  [
	I1002 20:12:19.546721   32280 command_runner.go:130] >     {
	I1002 20:12:19.546728   32280 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:12:19.546732   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.546739   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:12:19.546745   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546774   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.546794   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:12:19.546808   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:12:19.546815   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546819   32280 command_runner.go:130] >       "size":  "109379124",
	I1002 20:12:19.546826   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.546835   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.546843   32280 command_runner.go:130] >     },
	I1002 20:12:19.546850   32280 command_runner.go:130] >     {
	I1002 20:12:19.546862   32280 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:12:19.546873   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.546881   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:12:19.546890   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546896   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.546909   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:12:19.546920   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:12:19.546937   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546947   32280 command_runner.go:130] >       "size":  "31470524",
	I1002 20:12:19.546954   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.546966   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.546972   32280 command_runner.go:130] >     },
	I1002 20:12:19.546979   32280 command_runner.go:130] >     {
	I1002 20:12:19.546989   32280 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:12:19.547010   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547022   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:12:19.547032   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547039   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547053   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:12:19.547065   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:12:19.547073   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547080   32280 command_runner.go:130] >       "size":  "76103547",
	I1002 20:12:19.547087   32280 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:12:19.547091   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547094   32280 command_runner.go:130] >     },
	I1002 20:12:19.547100   32280 command_runner.go:130] >     {
	I1002 20:12:19.547113   32280 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:12:19.547119   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547129   32280 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:12:19.547135   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547144   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547154   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:12:19.547167   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:12:19.547176   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547182   32280 command_runner.go:130] >       "size":  "195976448",
	I1002 20:12:19.547187   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547192   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547201   32280 command_runner.go:130] >       },
	I1002 20:12:19.547217   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547228   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547233   32280 command_runner.go:130] >     },
	I1002 20:12:19.547242   32280 command_runner.go:130] >     {
	I1002 20:12:19.547252   32280 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:12:19.547261   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547269   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:12:19.547276   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547281   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547301   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:12:19.547316   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:12:19.547321   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547331   32280 command_runner.go:130] >       "size":  "89046001",
	I1002 20:12:19.547337   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547346   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547352   32280 command_runner.go:130] >       },
	I1002 20:12:19.547361   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547368   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547376   32280 command_runner.go:130] >     },
	I1002 20:12:19.547380   32280 command_runner.go:130] >     {
	I1002 20:12:19.547390   32280 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:12:19.547396   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547407   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:12:19.547413   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547423   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547435   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:12:19.547451   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:12:19.547459   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547466   32280 command_runner.go:130] >       "size":  "76004181",
	I1002 20:12:19.547474   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547480   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547489   32280 command_runner.go:130] >       },
	I1002 20:12:19.547495   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547507   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547512   32280 command_runner.go:130] >     },
	I1002 20:12:19.547517   32280 command_runner.go:130] >     {
	I1002 20:12:19.547527   32280 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:12:19.547534   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547541   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:12:19.547546   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547552   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547561   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:12:19.547582   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:12:19.547592   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547599   32280 command_runner.go:130] >       "size":  "73138073",
	I1002 20:12:19.547606   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547615   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547624   32280 command_runner.go:130] >     },
	I1002 20:12:19.547629   32280 command_runner.go:130] >     {
	I1002 20:12:19.547641   32280 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:12:19.547658   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547667   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:12:19.547673   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547683   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547693   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:12:19.547720   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:12:19.547729   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547733   32280 command_runner.go:130] >       "size":  "53844823",
	I1002 20:12:19.547737   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547743   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547752   32280 command_runner.go:130] >       },
	I1002 20:12:19.547758   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547768   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547775   32280 command_runner.go:130] >     },
	I1002 20:12:19.547782   32280 command_runner.go:130] >     {
	I1002 20:12:19.547794   32280 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:12:19.547804   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547814   32280 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:12:19.547820   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547825   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547839   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:12:19.547853   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:12:19.547861   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547867   32280 command_runner.go:130] >       "size":  "742092",
	I1002 20:12:19.547876   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547887   32280 command_runner.go:130] >         "value":  "65535"
	I1002 20:12:19.547894   32280 command_runner.go:130] >       },
	I1002 20:12:19.547900   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547906   32280 command_runner.go:130] >       "pinned":  true
	I1002 20:12:19.547910   32280 command_runner.go:130] >     }
	I1002 20:12:19.547917   32280 command_runner.go:130] >   ]
	I1002 20:12:19.547924   32280 command_runner.go:130] > }
	I1002 20:12:19.548472   32280 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:12:19.548485   32280 crio.go:433] Images already preloaded, skipping extraction
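Editor's note: the preload check above amounts to listing the runtime's images and comparing them against the image set expected for this Kubernetes version. A minimal sketch of that comparison, assuming the `sudo crictl images --output json` shape shown in the log (field names are taken from the output above; the required-image list is an illustrative subset):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Matches the JSON emitted by `sudo crictl images --output json` above.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Illustrative subset of the images expected for v1.34.1 on cri-o.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.34.1",
		"registry.k8s.io/etcd:3.6.4-0",
		"registry.k8s.io/pause:3.10.1",
	}
	for _, want := range required {
		if !have[want] {
			fmt.Println("missing:", want)
			return
		}
	}
	fmt.Println("all images are preloaded for cri-o runtime.")
}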
	I1002 20:12:19.548524   32280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:12:19.570809   32280 command_runner.go:130] > {
	I1002 20:12:19.570828   32280 command_runner.go:130] >   "images":  [
	I1002 20:12:19.570831   32280 command_runner.go:130] >     {
	I1002 20:12:19.570839   32280 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:12:19.570844   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.570849   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:12:19.570853   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570857   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.570864   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:12:19.570871   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:12:19.570877   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570882   32280 command_runner.go:130] >       "size":  "109379124",
	I1002 20:12:19.570889   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.570902   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.570908   32280 command_runner.go:130] >     },
	I1002 20:12:19.570914   32280 command_runner.go:130] >     {
	I1002 20:12:19.570922   32280 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:12:19.570928   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.570932   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:12:19.570938   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570941   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.570948   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:12:19.570958   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:12:19.570964   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570971   32280 command_runner.go:130] >       "size":  "31470524",
	I1002 20:12:19.570976   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.570985   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.570990   32280 command_runner.go:130] >     },
	I1002 20:12:19.570993   32280 command_runner.go:130] >     {
	I1002 20:12:19.571001   32280 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:12:19.571005   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571012   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:12:19.571016   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571021   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571028   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:12:19.571037   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:12:19.571043   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571047   32280 command_runner.go:130] >       "size":  "76103547",
	I1002 20:12:19.571050   32280 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:12:19.571056   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571059   32280 command_runner.go:130] >     },
	I1002 20:12:19.571065   32280 command_runner.go:130] >     {
	I1002 20:12:19.571071   32280 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:12:19.571077   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571081   32280 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:12:19.571087   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571091   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571099   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:12:19.571108   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:12:19.571113   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571117   32280 command_runner.go:130] >       "size":  "195976448",
	I1002 20:12:19.571122   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571126   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571132   32280 command_runner.go:130] >       },
	I1002 20:12:19.571139   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571145   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571152   32280 command_runner.go:130] >     },
	I1002 20:12:19.571157   32280 command_runner.go:130] >     {
	I1002 20:12:19.571163   32280 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:12:19.571169   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571173   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:12:19.571179   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571183   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571192   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:12:19.571201   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:12:19.571207   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571211   32280 command_runner.go:130] >       "size":  "89046001",
	I1002 20:12:19.571216   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571220   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571226   32280 command_runner.go:130] >       },
	I1002 20:12:19.571231   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571234   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571237   32280 command_runner.go:130] >     },
	I1002 20:12:19.571242   32280 command_runner.go:130] >     {
	I1002 20:12:19.571249   32280 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:12:19.571255   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571260   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:12:19.571265   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571269   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571276   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:12:19.571286   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:12:19.571292   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571295   32280 command_runner.go:130] >       "size":  "76004181",
	I1002 20:12:19.571301   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571305   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571310   32280 command_runner.go:130] >       },
	I1002 20:12:19.571314   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571318   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571323   32280 command_runner.go:130] >     },
	I1002 20:12:19.571327   32280 command_runner.go:130] >     {
	I1002 20:12:19.571335   32280 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:12:19.571339   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571349   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:12:19.571355   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571359   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571367   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:12:19.571376   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:12:19.571382   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571386   32280 command_runner.go:130] >       "size":  "73138073",
	I1002 20:12:19.571393   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571397   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571402   32280 command_runner.go:130] >     },
	I1002 20:12:19.571405   32280 command_runner.go:130] >     {
	I1002 20:12:19.571410   32280 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:12:19.571414   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571418   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:12:19.571422   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571425   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571431   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:12:19.571446   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:12:19.571455   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571461   32280 command_runner.go:130] >       "size":  "53844823",
	I1002 20:12:19.571469   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571474   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571482   32280 command_runner.go:130] >       },
	I1002 20:12:19.571488   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571495   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571498   32280 command_runner.go:130] >     },
	I1002 20:12:19.571504   32280 command_runner.go:130] >     {
	I1002 20:12:19.571510   32280 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:12:19.571516   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571520   32280 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:12:19.571526   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571530   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571542   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:12:19.571552   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:12:19.571556   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571562   32280 command_runner.go:130] >       "size":  "742092",
	I1002 20:12:19.571565   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571571   32280 command_runner.go:130] >         "value":  "65535"
	I1002 20:12:19.571575   32280 command_runner.go:130] >       },
	I1002 20:12:19.571581   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571585   32280 command_runner.go:130] >       "pinned":  true
	I1002 20:12:19.571590   32280 command_runner.go:130] >     }
	I1002 20:12:19.571593   32280 command_runner.go:130] >   ]
	I1002 20:12:19.571598   32280 command_runner.go:130] > }
	I1002 20:12:19.572597   32280 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:12:19.572614   32280 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:12:19.572621   32280 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:12:19.572734   32280 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
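Editor's note: the [Unit]/[Service]/[Install] text above is the kubelet systemd drop-in that minikube renders from the node config. A sketch of rendering such a drop-in with text/template, using a subset of the flags visible in the log (the template, helper type, and output path in the comment are illustrative, not minikube's actual code):

package main

import (
	"os"
	"text/template"
)

// Inputs for the drop-in; values mirror the log above.
type kubeletOpts struct {
	Version  string
	NodeName string
	NodeIP   string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	opts := kubeletOpts{Version: "v1.34.1", NodeName: "functional-753218", NodeIP: "192.168.49.2"}
	// Print the rendered unit; in practice it would be written to a
	// kubelet.service.d/ drop-in directory (path illustrative).
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}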
	I1002 20:12:19.572796   32280 ssh_runner.go:195] Run: crio config
	I1002 20:12:19.612615   32280 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 20:12:19.612638   32280 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 20:12:19.612664   32280 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 20:12:19.612669   32280 command_runner.go:130] > #
	I1002 20:12:19.612689   32280 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 20:12:19.612698   32280 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 20:12:19.612709   32280 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 20:12:19.612721   32280 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 20:12:19.612728   32280 command_runner.go:130] > # reload'.
	I1002 20:12:19.612738   32280 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 20:12:19.612748   32280 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 20:12:19.612758   32280 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 20:12:19.612768   32280 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 20:12:19.612773   32280 command_runner.go:130] > [crio]
	I1002 20:12:19.612785   32280 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 20:12:19.612796   32280 command_runner.go:130] > # containers images, in this directory.
	I1002 20:12:19.612808   32280 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 20:12:19.612821   32280 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 20:12:19.612828   32280 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1002 20:12:19.612841   32280 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory rather than under Root.
	I1002 20:12:19.612855   32280 command_runner.go:130] > # imagestore = ""
	I1002 20:12:19.612864   32280 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 20:12:19.612878   32280 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 20:12:19.612885   32280 command_runner.go:130] > # storage_driver = "overlay"
	I1002 20:12:19.612895   32280 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 20:12:19.612905   32280 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 20:12:19.612914   32280 command_runner.go:130] > # storage_option = [
	I1002 20:12:19.612917   32280 command_runner.go:130] > # ]
	I1002 20:12:19.612923   32280 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 20:12:19.612931   32280 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 20:12:19.612941   32280 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 20:12:19.612950   32280 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 20:12:19.612959   32280 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 20:12:19.612970   32280 command_runner.go:130] > # always happen on a node reboot
	I1002 20:12:19.612977   32280 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 20:12:19.612994   32280 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 20:12:19.613004   32280 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 20:12:19.613009   32280 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 20:12:19.613016   32280 command_runner.go:130] > # version_file_persist = ""
	I1002 20:12:19.613025   32280 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 20:12:19.613033   32280 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 20:12:19.613041   32280 command_runner.go:130] > # internal_wipe = true
	I1002 20:12:19.613054   32280 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1002 20:12:19.613066   32280 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1002 20:12:19.613075   32280 command_runner.go:130] > # internal_repair = true
	I1002 20:12:19.613083   32280 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 20:12:19.613095   32280 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 20:12:19.613113   32280 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 20:12:19.613120   32280 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 20:12:19.613129   32280 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 20:12:19.613134   32280 command_runner.go:130] > [crio.api]
	I1002 20:12:19.613142   32280 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 20:12:19.613150   32280 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 20:12:19.613162   32280 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 20:12:19.613173   32280 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 20:12:19.613185   32280 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 20:12:19.613197   32280 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 20:12:19.613204   32280 command_runner.go:130] > # stream_port = "0"
	I1002 20:12:19.613213   32280 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 20:12:19.613222   32280 command_runner.go:130] > # stream_enable_tls = false
	I1002 20:12:19.613231   32280 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 20:12:19.613238   32280 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 20:12:19.613248   32280 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 20:12:19.613260   32280 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1002 20:12:19.613266   32280 command_runner.go:130] > # stream_tls_cert = ""
	I1002 20:12:19.613274   32280 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 20:12:19.613292   32280 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1002 20:12:19.613301   32280 command_runner.go:130] > # stream_tls_key = ""
	I1002 20:12:19.613309   32280 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 20:12:19.613323   32280 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 20:12:19.613331   32280 command_runner.go:130] > # automatically pick up the changes.
	I1002 20:12:19.613340   32280 command_runner.go:130] > # stream_tls_ca = ""
	I1002 20:12:19.613394   32280 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:12:19.613408   32280 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 20:12:19.613420   32280 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:12:19.613430   32280 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1002 20:12:19.613440   32280 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 20:12:19.613452   32280 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 20:12:19.613458   32280 command_runner.go:130] > [crio.runtime]
	I1002 20:12:19.613469   32280 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 20:12:19.613481   32280 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 20:12:19.613487   32280 command_runner.go:130] > # "nofile=1024:2048"
	I1002 20:12:19.613500   32280 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 20:12:19.613508   32280 command_runner.go:130] > # default_ulimits = [
	I1002 20:12:19.613514   32280 command_runner.go:130] > # ]
	I1002 20:12:19.613526   32280 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 20:12:19.613532   32280 command_runner.go:130] > # no_pivot = false
	I1002 20:12:19.613543   32280 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 20:12:19.613554   32280 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 20:12:19.613564   32280 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 20:12:19.613573   32280 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 20:12:19.613584   32280 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 20:12:19.613594   32280 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:12:19.613603   32280 command_runner.go:130] > # conmon = ""
	I1002 20:12:19.613611   32280 command_runner.go:130] > # Cgroup setting for conmon
	I1002 20:12:19.613625   32280 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 20:12:19.613632   32280 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 20:12:19.613642   32280 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 20:12:19.613664   32280 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 20:12:19.613682   32280 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:12:19.613692   32280 command_runner.go:130] > # conmon_env = [
	I1002 20:12:19.613698   32280 command_runner.go:130] > # ]
	I1002 20:12:19.613710   32280 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 20:12:19.613720   32280 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 20:12:19.613729   32280 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 20:12:19.613739   32280 command_runner.go:130] > # default_env = [
	I1002 20:12:19.613746   32280 command_runner.go:130] > # ]
	I1002 20:12:19.613758   32280 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 20:12:19.613769   32280 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1002 20:12:19.613778   32280 command_runner.go:130] > # selinux = false
	I1002 20:12:19.613788   32280 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 20:12:19.613803   32280 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1002 20:12:19.613814   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.613822   32280 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:12:19.613835   32280 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1002 20:12:19.613846   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.613852   32280 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1002 20:12:19.613865   32280 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 20:12:19.613878   32280 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 20:12:19.613890   32280 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 20:12:19.613899   32280 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 20:12:19.613908   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.613917   32280 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 20:12:19.613926   32280 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 20:12:19.613937   32280 command_runner.go:130] > # the cgroup blockio controller.
	I1002 20:12:19.613944   32280 command_runner.go:130] > # blockio_config_file = ""
	I1002 20:12:19.613958   32280 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1002 20:12:19.613965   32280 command_runner.go:130] > # blockio parameters.
	I1002 20:12:19.613974   32280 command_runner.go:130] > # blockio_reload = false
	I1002 20:12:19.613983   32280 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 20:12:19.613994   32280 command_runner.go:130] > # irqbalance daemon.
	I1002 20:12:19.614002   32280 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 20:12:19.614013   32280 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1002 20:12:19.614023   32280 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1002 20:12:19.614037   32280 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1002 20:12:19.614048   32280 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1002 20:12:19.614061   32280 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 20:12:19.614068   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.614077   32280 command_runner.go:130] > # rdt_config_file = ""
	I1002 20:12:19.614085   32280 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 20:12:19.614095   32280 command_runner.go:130] > # cgroup_manager = "systemd"
	I1002 20:12:19.614104   32280 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 20:12:19.614113   32280 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 20:12:19.614127   32280 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 20:12:19.614139   32280 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 20:12:19.614147   32280 command_runner.go:130] > # will be added.
	I1002 20:12:19.614155   32280 command_runner.go:130] > # default_capabilities = [
	I1002 20:12:19.614163   32280 command_runner.go:130] > # 	"CHOWN",
	I1002 20:12:19.614170   32280 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 20:12:19.614177   32280 command_runner.go:130] > # 	"FSETID",
	I1002 20:12:19.614182   32280 command_runner.go:130] > # 	"FOWNER",
	I1002 20:12:19.614187   32280 command_runner.go:130] > # 	"SETGID",
	I1002 20:12:19.614210   32280 command_runner.go:130] > # 	"SETUID",
	I1002 20:12:19.614214   32280 command_runner.go:130] > # 	"SETPCAP",
	I1002 20:12:19.614219   32280 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 20:12:19.614223   32280 command_runner.go:130] > # 	"KILL",
	I1002 20:12:19.614227   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614236   32280 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 20:12:19.614243   32280 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 20:12:19.614248   32280 command_runner.go:130] > # add_inheritable_capabilities = false
	I1002 20:12:19.614256   32280 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 20:12:19.614265   32280 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:12:19.614271   32280 command_runner.go:130] > default_sysctls = [
	I1002 20:12:19.614279   32280 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1002 20:12:19.614284   32280 command_runner.go:130] > ]
	I1002 20:12:19.614291   32280 command_runner.go:130] > # List of devices on the host that a
	I1002 20:12:19.614299   32280 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 20:12:19.614308   32280 command_runner.go:130] > # allowed_devices = [
	I1002 20:12:19.614313   32280 command_runner.go:130] > # 	"/dev/fuse",
	I1002 20:12:19.614321   32280 command_runner.go:130] > # 	"/dev/net/tun",
	I1002 20:12:19.614327   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614335   32280 command_runner.go:130] > # List of additional devices, specified as
	I1002 20:12:19.614349   32280 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 20:12:19.614359   32280 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 20:12:19.614368   32280 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:12:19.614376   32280 command_runner.go:130] > # additional_devices = [
	I1002 20:12:19.614381   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614388   32280 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 20:12:19.614394   32280 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 20:12:19.614398   32280 command_runner.go:130] > # 	"/etc/cdi",
	I1002 20:12:19.614402   32280 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 20:12:19.614404   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614410   32280 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 20:12:19.614416   32280 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 20:12:19.614420   32280 command_runner.go:130] > # Defaults to false.
	I1002 20:12:19.614424   32280 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 20:12:19.614432   32280 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 20:12:19.614438   32280 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 20:12:19.614441   32280 command_runner.go:130] > # hooks_dir = [
	I1002 20:12:19.614445   32280 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 20:12:19.614449   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614454   32280 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 20:12:19.614462   32280 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 20:12:19.614467   32280 command_runner.go:130] > # its default mounts from the following two files:
	I1002 20:12:19.614471   32280 command_runner.go:130] > #
	I1002 20:12:19.614476   32280 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 20:12:19.614484   32280 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 20:12:19.614489   32280 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 20:12:19.614494   32280 command_runner.go:130] > #
	I1002 20:12:19.614500   32280 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 20:12:19.614506   32280 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 20:12:19.614514   32280 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 20:12:19.614519   32280 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 20:12:19.614524   32280 command_runner.go:130] > #
	I1002 20:12:19.614528   32280 command_runner.go:130] > # default_mounts_file = ""
	I1002 20:12:19.614532   32280 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 20:12:19.614539   32280 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 20:12:19.614545   32280 command_runner.go:130] > # pids_limit = -1
	I1002 20:12:19.614551   32280 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1002 20:12:19.614559   32280 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 20:12:19.614564   32280 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 20:12:19.614572   32280 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 20:12:19.614578   32280 command_runner.go:130] > # log_size_max = -1
	I1002 20:12:19.614716   32280 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 20:12:19.614727   32280 command_runner.go:130] > # log_to_journald = false
	I1002 20:12:19.614733   32280 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 20:12:19.614738   32280 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 20:12:19.614745   32280 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 20:12:19.614750   32280 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 20:12:19.614757   32280 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 20:12:19.614761   32280 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 20:12:19.614766   32280 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 20:12:19.614772   32280 command_runner.go:130] > # read_only = false
	I1002 20:12:19.614777   32280 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 20:12:19.614785   32280 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 20:12:19.614789   32280 command_runner.go:130] > # live configuration reload.
	I1002 20:12:19.614795   32280 command_runner.go:130] > # log_level = "info"
	I1002 20:12:19.614800   32280 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 20:12:19.614807   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.614811   32280 command_runner.go:130] > # log_filter = ""
	I1002 20:12:19.614817   32280 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 20:12:19.614825   32280 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 20:12:19.614829   32280 command_runner.go:130] > # separated by comma.
	I1002 20:12:19.614839   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614846   32280 command_runner.go:130] > # uid_mappings = ""
	I1002 20:12:19.614851   32280 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 20:12:19.614859   32280 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 20:12:19.614863   32280 command_runner.go:130] > # separated by comma.
	I1002 20:12:19.614873   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614877   32280 command_runner.go:130] > # gid_mappings = ""
	I1002 20:12:19.614884   32280 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 20:12:19.614890   32280 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:12:19.614898   32280 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:12:19.614905   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614909   32280 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 20:12:19.614916   32280 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 20:12:19.614924   32280 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:12:19.614931   32280 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:12:19.614940   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614944   32280 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 20:12:19.614949   32280 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 20:12:19.614959   32280 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 20:12:19.614964   32280 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 20:12:19.614970   32280 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 20:12:19.614975   32280 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 20:12:19.614983   32280 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 20:12:19.614988   32280 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 20:12:19.614993   32280 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 20:12:19.614999   32280 command_runner.go:130] > # drop_infra_ctr = true
	I1002 20:12:19.615004   32280 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 20:12:19.615009   32280 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 20:12:19.615018   32280 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 20:12:19.615024   32280 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 20:12:19.615031   32280 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1002 20:12:19.615038   32280 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1002 20:12:19.615044   32280 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1002 20:12:19.615052   32280 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1002 20:12:19.615055   32280 command_runner.go:130] > # shared_cpuset = ""
	I1002 20:12:19.615063   32280 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 20:12:19.615068   32280 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 20:12:19.615073   32280 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 20:12:19.615080   32280 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 20:12:19.615086   32280 command_runner.go:130] > # pinns_path = ""
	I1002 20:12:19.615090   32280 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1002 20:12:19.615098   32280 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1002 20:12:19.615102   32280 command_runner.go:130] > # enable_criu_support = true
	I1002 20:12:19.615111   32280 command_runner.go:130] > # Enable/disable the generation of the container,
	I1002 20:12:19.615116   32280 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1002 20:12:19.615123   32280 command_runner.go:130] > # enable_pod_events = false
	I1002 20:12:19.615128   32280 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 20:12:19.615135   32280 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1002 20:12:19.615139   32280 command_runner.go:130] > # default_runtime = "crun"
	I1002 20:12:19.615146   32280 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 20:12:19.615152   32280 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1002 20:12:19.615161   32280 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 20:12:19.615168   32280 command_runner.go:130] > # creation as a file is not desired either.
	I1002 20:12:19.615175   32280 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 20:12:19.615182   32280 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 20:12:19.615187   32280 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 20:12:19.615190   32280 command_runner.go:130] > # ]
	I1002 20:12:19.615195   32280 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 20:12:19.615201   32280 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 20:12:19.615207   32280 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1002 20:12:19.615214   32280 command_runner.go:130] > # Each entry in the table should follow the format:
	I1002 20:12:19.615216   32280 command_runner.go:130] > #
	I1002 20:12:19.615221   32280 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1002 20:12:19.615227   32280 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1002 20:12:19.615231   32280 command_runner.go:130] > # runtime_type = "oci"
	I1002 20:12:19.615237   32280 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1002 20:12:19.615241   32280 command_runner.go:130] > # inherit_default_runtime = false
	I1002 20:12:19.615246   32280 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1002 20:12:19.615252   32280 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1002 20:12:19.615256   32280 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1002 20:12:19.615262   32280 command_runner.go:130] > # monitor_env = []
	I1002 20:12:19.615266   32280 command_runner.go:130] > # privileged_without_host_devices = false
	I1002 20:12:19.615270   32280 command_runner.go:130] > # allowed_annotations = []
	I1002 20:12:19.615278   32280 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1002 20:12:19.615282   32280 command_runner.go:130] > # no_sync_log = false
	I1002 20:12:19.615288   32280 command_runner.go:130] > # default_annotations = {}
	I1002 20:12:19.615293   32280 command_runner.go:130] > # stream_websockets = false
	I1002 20:12:19.615299   32280 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:12:19.615333   32280 command_runner.go:130] > # Where:
	I1002 20:12:19.615343   32280 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1002 20:12:19.615349   32280 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1002 20:12:19.615354   32280 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 20:12:19.615363   32280 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 20:12:19.615366   32280 command_runner.go:130] > #   in $PATH.
	I1002 20:12:19.615375   32280 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1002 20:12:19.615380   32280 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 20:12:19.615387   32280 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1002 20:12:19.615391   32280 command_runner.go:130] > #   state.
	I1002 20:12:19.615400   32280 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 20:12:19.615413   32280 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 20:12:19.615421   32280 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1002 20:12:19.615428   32280 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1002 20:12:19.615435   32280 command_runner.go:130] > #   the values from the default runtime on load time.
	I1002 20:12:19.615441   32280 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 20:12:19.615446   32280 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 20:12:19.615452   32280 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 20:12:19.615458   32280 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 20:12:19.615465   32280 command_runner.go:130] > #   The currently recognized values are:
	I1002 20:12:19.615470   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 20:12:19.615479   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 20:12:19.615485   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 20:12:19.615490   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 20:12:19.615499   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 20:12:19.615505   32280 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 20:12:19.615514   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1002 20:12:19.615521   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1002 20:12:19.615529   32280 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 20:12:19.615534   32280 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1002 20:12:19.615541   32280 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1002 20:12:19.615549   32280 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1002 20:12:19.615555   32280 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1002 20:12:19.615564   32280 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1002 20:12:19.615569   32280 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1002 20:12:19.615579   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1002 20:12:19.615586   32280 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1002 20:12:19.615589   32280 command_runner.go:130] > #   deprecated option "conmon".
	I1002 20:12:19.615596   32280 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1002 20:12:19.615601   32280 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1002 20:12:19.615607   32280 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1002 20:12:19.615614   32280 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 20:12:19.615621   32280 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1002 20:12:19.615628   32280 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1002 20:12:19.615634   32280 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1002 20:12:19.615638   32280 command_runner.go:130] > #   conmon-rs by using:
	I1002 20:12:19.615656   32280 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1002 20:12:19.615668   32280 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1002 20:12:19.615682   32280 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1002 20:12:19.615690   32280 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1002 20:12:19.615695   32280 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1002 20:12:19.615704   32280 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1002 20:12:19.615712   32280 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1002 20:12:19.615720   32280 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1002 20:12:19.615731   32280 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1002 20:12:19.615747   32280 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1002 20:12:19.615756   32280 command_runner.go:130] > #   when a machine crash happens.
	I1002 20:12:19.615765   32280 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1002 20:12:19.615774   32280 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1002 20:12:19.615784   32280 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1002 20:12:19.615788   32280 command_runner.go:130] > #   seccomp profile for the runtime.
	I1002 20:12:19.615797   32280 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1002 20:12:19.615804   32280 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1002 20:12:19.615810   32280 command_runner.go:130] > #
	I1002 20:12:19.615818   32280 command_runner.go:130] > # Using the seccomp notifier feature:
	I1002 20:12:19.615826   32280 command_runner.go:130] > #
	I1002 20:12:19.615838   32280 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1002 20:12:19.615850   32280 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1002 20:12:19.615854   32280 command_runner.go:130] > #
	I1002 20:12:19.615860   32280 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1002 20:12:19.615868   32280 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1002 20:12:19.615871   32280 command_runner.go:130] > #
	I1002 20:12:19.615880   32280 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1002 20:12:19.615889   32280 command_runner.go:130] > # feature.
	I1002 20:12:19.615894   32280 command_runner.go:130] > #
	I1002 20:12:19.615906   32280 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1002 20:12:19.615918   32280 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1002 20:12:19.615931   32280 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1002 20:12:19.615943   32280 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1002 20:12:19.615954   32280 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1002 20:12:19.615957   32280 command_runner.go:130] > #
	I1002 20:12:19.615964   32280 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1002 20:12:19.615972   32280 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1002 20:12:19.615977   32280 command_runner.go:130] > #
	I1002 20:12:19.615989   32280 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1002 20:12:19.616001   32280 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1002 20:12:19.616010   32280 command_runner.go:130] > #
	I1002 20:12:19.616019   32280 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1002 20:12:19.616031   32280 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1002 20:12:19.616039   32280 command_runner.go:130] > # limitation.
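A minimal sketch of wiring the notifier up, assuming a recent kubectl with the --annotations flag; the drop-in name and pod name are illustrative:

# 1) Allow the notifier annotation for the runc handler.
sudo tee /etc/crio/crio.conf.d/99-seccomp-notifier.conf <<'EOF'
[crio.runtime.runtimes.runc]
allowed_annotations = [
    "io.kubernetes.cri-o.seccompNotifierAction",
]
EOF
sudo systemctl restart crio
# 2) Opt a pod in; restartPolicy must be Never, as explained above.
kubectl run notifier-demo --image=busybox:1.36 --restart=Never \
  --annotations=io.kubernetes.cri-o.seccompNotifierAction=stop \
  -- sleep 3600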
	I1002 20:12:19.616045   32280 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1002 20:12:19.616054   32280 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1002 20:12:19.616058   32280 command_runner.go:130] > runtime_type = ""
	I1002 20:12:19.616063   32280 command_runner.go:130] > runtime_root = "/run/crun"
	I1002 20:12:19.616073   32280 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:12:19.616082   32280 command_runner.go:130] > runtime_config_path = ""
	I1002 20:12:19.616091   32280 command_runner.go:130] > container_min_memory = ""
	I1002 20:12:19.616098   32280 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:12:19.616107   32280 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:12:19.616115   32280 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:12:19.616124   32280 command_runner.go:130] > allowed_annotations = [
	I1002 20:12:19.616131   32280 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1002 20:12:19.616137   32280 command_runner.go:130] > ]
	I1002 20:12:19.616141   32280 command_runner.go:130] > privileged_without_host_devices = false
	I1002 20:12:19.616146   32280 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 20:12:19.616157   32280 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1002 20:12:19.616163   32280 command_runner.go:130] > runtime_type = ""
	I1002 20:12:19.616173   32280 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 20:12:19.616180   32280 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:12:19.616189   32280 command_runner.go:130] > runtime_config_path = ""
	I1002 20:12:19.616196   32280 command_runner.go:130] > container_min_memory = ""
	I1002 20:12:19.616206   32280 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:12:19.616215   32280 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:12:19.616221   32280 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:12:19.616228   32280 command_runner.go:130] > privileged_without_host_devices = false
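The two stanzas above are the stock crun and runc handlers; an additional handler follows the same table format. A minimal sketch with hypothetical names and paths, plus the RuntimeClass that makes the handler selectable from Kubernetes:

# Register a hypothetical OCI runtime under the handler name "myruntime".
sudo tee /etc/crio/crio.conf.d/99-myruntime.conf <<'EOF'
[crio.runtime.runtimes.myruntime]
runtime_path = "/usr/local/bin/myruntime"
runtime_type = "oci"
runtime_root = "/run/myruntime"
EOF
sudo systemctl restart crio
# Expose it to pods via a RuntimeClass whose handler matches the table name.
kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: myruntime
handler: myruntime
EOF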
	I1002 20:12:19.616238   32280 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 20:12:19.616247   32280 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 20:12:19.616258   32280 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 20:12:19.616272   32280 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1002 20:12:19.616289   32280 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1002 20:12:19.616305   32280 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1002 20:12:19.616314   32280 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1002 20:12:19.616323   32280 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 20:12:19.616340   32280 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 20:12:19.616353   32280 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 20:12:19.616366   32280 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 20:12:19.616380   32280 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 20:12:19.616387   32280 command_runner.go:130] > # Example:
	I1002 20:12:19.616393   32280 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 20:12:19.616401   32280 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 20:12:19.616408   32280 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 20:12:19.616420   32280 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 20:12:19.616430   32280 command_runner.go:130] > # cpuset = "0-1"
	I1002 20:12:19.616435   32280 command_runner.go:130] > # cpushares = "5"
	I1002 20:12:19.616442   32280 command_runner.go:130] > # cpuquota = "1000"
	I1002 20:12:19.616451   32280 command_runner.go:130] > # cpuperiod = "100000"
	I1002 20:12:19.616457   32280 command_runner.go:130] > # cpulimit = "35"
	I1002 20:12:19.616466   32280 command_runner.go:130] > # Where:
	I1002 20:12:19.616473   32280 command_runner.go:130] > # The workload name is workload-type.
	I1002 20:12:19.616483   32280 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 20:12:19.616489   32280 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 20:12:19.616502   32280 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 20:12:19.616516   32280 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 20:12:19.616528   32280 command_runner.go:130] > # "io.crio.workload-type.cpushares/$container_name" = "value"
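A minimal sketch of a pod opting into the example workload above, assuming a recent kubectl with the --annotations flag (the cpushares value 512 is arbitrary; with kubectl run the container name equals the pod name):

kubectl run workload-demo --image=busybox:1.36 --restart=Never \
  --annotations=io.crio/workload= \
  --annotations=io.crio.workload-type.cpushares/workload-demo=512 \
  -- sleep 3600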
	I1002 20:12:19.616541   32280 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1002 20:12:19.616551   32280 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1002 20:12:19.616560   32280 command_runner.go:130] > # Default value is set to true
	I1002 20:12:19.616566   32280 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1002 20:12:19.616574   32280 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1002 20:12:19.616582   32280 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1002 20:12:19.616592   32280 command_runner.go:130] > # Default value is set to 'false'
	I1002 20:12:19.616601   32280 command_runner.go:130] > # disable_hostport_mapping = false
	I1002 20:12:19.616612   32280 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1002 20:12:19.616624   32280 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1002 20:12:19.616632   32280 command_runner.go:130] > # timezone = ""
	I1002 20:12:19.616642   32280 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 20:12:19.616658   32280 command_runner.go:130] > #
	I1002 20:12:19.616667   32280 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 20:12:19.616686   32280 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1002 20:12:19.616695   32280 command_runner.go:130] > [crio.image]
	I1002 20:12:19.616703   32280 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 20:12:19.616714   32280 command_runner.go:130] > # default_transport = "docker://"
	I1002 20:12:19.616725   32280 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 20:12:19.616732   32280 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:12:19.616739   32280 command_runner.go:130] > # global_auth_file = ""
	I1002 20:12:19.616751   32280 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 20:12:19.616762   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.616771   32280 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1002 20:12:19.616783   32280 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 20:12:19.616795   32280 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:12:19.616804   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.616811   32280 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 20:12:19.616817   32280 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 20:12:19.616825   32280 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1002 20:12:19.616830   32280 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1002 20:12:19.616837   32280 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 20:12:19.616842   32280 command_runner.go:130] > # pause_command = "/pause"
	I1002 20:12:19.616852   32280 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1002 20:12:19.616864   32280 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1002 20:12:19.616877   32280 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1002 20:12:19.616889   32280 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1002 20:12:19.616899   32280 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1002 20:12:19.616911   32280 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1002 20:12:19.616918   32280 command_runner.go:130] > # pinned_images = [
	I1002 20:12:19.616921   32280 command_runner.go:130] > # ]
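A minimal sketch of pinning images via a drop-in (the registry prefix example.internal/base/* is hypothetical; the trailing * is the glob form described above):

sudo tee /etc/crio/crio.conf.d/99-pinned.conf <<'EOF'
[crio.image]
pinned_images = [
    "registry.k8s.io/pause:3.10.1",
    "example.internal/base/*",
]
EOF
sudo systemctl restart crio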
	I1002 20:12:19.616928   32280 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 20:12:19.616937   32280 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 20:12:19.616942   32280 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 20:12:19.616947   32280 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 20:12:19.616955   32280 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 20:12:19.616959   32280 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1002 20:12:19.616965   32280 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1002 20:12:19.616973   32280 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1002 20:12:19.616979   32280 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1002 20:12:19.616988   32280 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or the
	I1002 20:12:19.616997   32280 command_runner.go:130] > # system-wide policy will be used as a fallback. Must be an absolute path.
	I1002 20:12:19.617009   32280 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1002 20:12:19.617020   32280 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 20:12:19.617036   32280 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 20:12:19.617044   32280 command_runner.go:130] > # changing them here.
	I1002 20:12:19.617053   32280 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1002 20:12:19.617062   32280 command_runner.go:130] > # insecure_registries = [
	I1002 20:12:19.617066   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617073   32280 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 20:12:19.617078   32280 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1002 20:12:19.617084   32280 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 20:12:19.617089   32280 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 20:12:19.617095   32280 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 20:12:19.617101   32280 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1002 20:12:19.617107   32280 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1002 20:12:19.617111   32280 command_runner.go:130] > # auto_reload_registries = false
	I1002 20:12:19.617117   32280 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1002 20:12:19.617127   32280 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval, as pull_progress_timeout / 10.
	I1002 20:12:19.617135   32280 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1002 20:12:19.617138   32280 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1002 20:12:19.617143   32280 command_runner.go:130] > # The mode of short name resolution.
	I1002 20:12:19.617149   32280 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1002 20:12:19.617158   32280 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1002 20:12:19.617163   32280 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1002 20:12:19.617169   32280 command_runner.go:130] > # short_name_mode = "enforcing"
	I1002 20:12:19.617175   32280 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1002 20:12:19.617182   32280 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1002 20:12:19.617186   32280 command_runner.go:130] > # oci_artifact_mount_support = true
	I1002 20:12:19.617192   32280 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 20:12:19.617197   32280 command_runner.go:130] > # CNI plugins.
	I1002 20:12:19.617200   32280 command_runner.go:130] > [crio.network]
	I1002 20:12:19.617206   32280 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 20:12:19.617212   32280 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1002 20:12:19.617219   32280 command_runner.go:130] > # cni_default_network = ""
	I1002 20:12:19.617231   32280 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 20:12:19.617240   32280 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 20:12:19.617246   32280 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 20:12:19.617250   32280 command_runner.go:130] > # plugin_dirs = [
	I1002 20:12:19.617254   32280 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 20:12:19.617256   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617261   32280 command_runner.go:130] > # List of included pod metrics.
	I1002 20:12:19.617266   32280 command_runner.go:130] > # included_pod_metrics = [
	I1002 20:12:19.617269   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617274   32280 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1002 20:12:19.617279   32280 command_runner.go:130] > [crio.metrics]
	I1002 20:12:19.617284   32280 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 20:12:19.617290   32280 command_runner.go:130] > # enable_metrics = false
	I1002 20:12:19.617294   32280 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 20:12:19.617298   32280 command_runner.go:130] > # By default, all metrics are enabled.
	I1002 20:12:19.617306   32280 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 20:12:19.617312   32280 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 20:12:19.617320   32280 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 20:12:19.617323   32280 command_runner.go:130] > # metrics_collectors = [
	I1002 20:12:19.617327   32280 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 20:12:19.617331   32280 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1002 20:12:19.617334   32280 command_runner.go:130] > # 	"containers_oom_total",
	I1002 20:12:19.617338   32280 command_runner.go:130] > # 	"processes_defunct",
	I1002 20:12:19.617341   32280 command_runner.go:130] > # 	"operations_total",
	I1002 20:12:19.617345   32280 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 20:12:19.617348   32280 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 20:12:19.617352   32280 command_runner.go:130] > # 	"operations_errors_total",
	I1002 20:12:19.617355   32280 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 20:12:19.617359   32280 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 20:12:19.617363   32280 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 20:12:19.617367   32280 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 20:12:19.617371   32280 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 20:12:19.617375   32280 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 20:12:19.617379   32280 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1002 20:12:19.617383   32280 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1002 20:12:19.617388   32280 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1002 20:12:19.617391   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617397   32280 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1002 20:12:19.617403   32280 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1002 20:12:19.617407   32280 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 20:12:19.617411   32280 command_runner.go:130] > # metrics_port = 9090
	I1002 20:12:19.617415   32280 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 20:12:19.617419   32280 command_runner.go:130] > # metrics_socket = ""
	I1002 20:12:19.617423   32280 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 20:12:19.617429   32280 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 20:12:19.617437   32280 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 20:12:19.617441   32280 command_runner.go:130] > # certificate on any modification event.
	I1002 20:12:19.617447   32280 command_runner.go:130] > # metrics_cert = ""
	I1002 20:12:19.617452   32280 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 20:12:19.617456   32280 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 20:12:19.617460   32280 command_runner.go:130] > # metrics_key = ""
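A minimal sketch of turning the metrics endpoint on and scraping it once, using the default host and port shown above (the drop-in name is illustrative):

sudo tee /etc/crio/crio.conf.d/99-metrics.conf <<'EOF'
[crio.metrics]
enable_metrics = true
metrics_host = "127.0.0.1"
metrics_port = 9090
EOF
sudo systemctl restart crio
curl -s http://127.0.0.1:9090/metrics | grep '^crio_' | head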
	I1002 20:12:19.617465   32280 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 20:12:19.617471   32280 command_runner.go:130] > [crio.tracing]
	I1002 20:12:19.617476   32280 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 20:12:19.617482   32280 command_runner.go:130] > # enable_tracing = false
	I1002 20:12:19.617488   32280 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1002 20:12:19.617494   32280 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1002 20:12:19.617500   32280 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1002 20:12:19.617506   32280 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1002 20:12:19.617511   32280 command_runner.go:130] > # CRI-O NRI configuration.
	I1002 20:12:19.617514   32280 command_runner.go:130] > [crio.nri]
	I1002 20:12:19.617518   32280 command_runner.go:130] > # Globally enable or disable NRI.
	I1002 20:12:19.617524   32280 command_runner.go:130] > # enable_nri = true
	I1002 20:12:19.617527   32280 command_runner.go:130] > # NRI socket to listen on.
	I1002 20:12:19.617533   32280 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1002 20:12:19.617539   32280 command_runner.go:130] > # NRI plugin directory to use.
	I1002 20:12:19.617543   32280 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1002 20:12:19.617547   32280 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1002 20:12:19.617552   32280 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1002 20:12:19.617560   32280 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1002 20:12:19.617591   32280 command_runner.go:130] > # nri_disable_connections = false
	I1002 20:12:19.617598   32280 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1002 20:12:19.617604   32280 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1002 20:12:19.617612   32280 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1002 20:12:19.617623   32280 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1002 20:12:19.617630   32280 command_runner.go:130] > # NRI default validator configuration.
	I1002 20:12:19.617637   32280 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1002 20:12:19.617645   32280 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1002 20:12:19.617661   32280 command_runner.go:130] > # can be restricted/rejected:
	I1002 20:12:19.617671   32280 command_runner.go:130] > # - OCI hook injection
	I1002 20:12:19.617683   32280 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1002 20:12:19.617691   32280 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1002 20:12:19.617696   32280 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1002 20:12:19.617702   32280 command_runner.go:130] > # - adjustment of linux namespaces
	I1002 20:12:19.617708   32280 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1002 20:12:19.617715   32280 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1002 20:12:19.617720   32280 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1002 20:12:19.617722   32280 command_runner.go:130] > #
	I1002 20:12:19.617726   32280 command_runner.go:130] > # [crio.nri.default_validator]
	I1002 20:12:19.617733   32280 command_runner.go:130] > # nri_enable_default_validator = false
	I1002 20:12:19.617737   32280 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1002 20:12:19.617743   32280 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1002 20:12:19.617750   32280 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1002 20:12:19.617755   32280 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1002 20:12:19.617759   32280 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1002 20:12:19.617764   32280 command_runner.go:130] > # nri_validator_required_plugins = [
	I1002 20:12:19.617767   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617771   32280 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
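A minimal sketch of enabling the builtin NRI validator and rejecting OCI hook injection, mirroring the commented defaults above (the drop-in name is illustrative):

sudo tee /etc/crio/crio.conf.d/99-nri.conf <<'EOF'
[crio.nri]
enable_nri = true
[crio.nri.default_validator]
nri_enable_default_validator = true
nri_validator_reject_oci_hook_adjustment = true
EOF
sudo systemctl restart crio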
	I1002 20:12:19.617779   32280 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 20:12:19.617782   32280 command_runner.go:130] > [crio.stats]
	I1002 20:12:19.617787   32280 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 20:12:19.617796   32280 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 20:12:19.617800   32280 command_runner.go:130] > # stats_collection_period = 0
	I1002 20:12:19.617807   32280 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1002 20:12:19.617812   32280 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1002 20:12:19.617819   32280 command_runner.go:130] > # collection_period = 0
	I1002 20:12:19.617847   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597735388Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1002 20:12:19.617857   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597762161Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1002 20:12:19.617879   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597788561Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1002 20:12:19.617891   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597814431Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1002 20:12:19.617901   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597905829Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:19.617910   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.59812179Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1002 20:12:19.617937   32280 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 20:12:19.618034   32280 cni.go:84] Creating CNI manager for ""
	I1002 20:12:19.618045   32280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:12:19.618055   32280 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:12:19.618074   32280 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753218 NodeName:functional-753218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:12:19.618185   32280 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753218"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:12:19.618237   32280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:12:19.625483   32280 command_runner.go:130] > kubeadm
	I1002 20:12:19.625499   32280 command_runner.go:130] > kubectl
	I1002 20:12:19.625503   32280 command_runner.go:130] > kubelet
	I1002 20:12:19.626080   32280 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:12:19.626131   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:12:19.633273   32280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:12:19.644695   32280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:12:19.656113   32280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
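The generated multi-document config written above can be sanity-checked with kubeadm itself; a minimal sketch, assuming a kubeadm release new enough to ship the "config validate" subcommand:

# Validate all documents (InitConfiguration, ClusterConfiguration,
# KubeletConfiguration, KubeProxyConfiguration) in the generated file.
sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new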
	I1002 20:12:19.667414   32280 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:12:19.670740   32280 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1002 20:12:19.670794   32280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:12:19.752159   32280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:12:19.764280   32280 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218 for IP: 192.168.49.2
	I1002 20:12:19.764303   32280 certs.go:195] generating shared ca certs ...
	I1002 20:12:19.764324   32280 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:19.764461   32280 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:12:19.764507   32280 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:12:19.764516   32280 certs.go:257] generating profile certs ...
	I1002 20:12:19.764596   32280 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key
	I1002 20:12:19.764641   32280 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key.2c64f804
	I1002 20:12:19.764700   32280 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key
	I1002 20:12:19.764711   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:12:19.764723   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:12:19.764735   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:12:19.764749   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:12:19.764761   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:12:19.764773   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:12:19.764785   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:12:19.764797   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:12:19.764840   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:12:19.764868   32280 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:12:19.764878   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:12:19.764907   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:12:19.764932   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:12:19.764953   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:12:19.764991   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:12:19.765016   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:19.765029   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.765042   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:12:19.765474   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:12:19.782548   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:12:19.799734   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:12:19.816390   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:12:19.832589   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:12:19.848700   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:12:19.864849   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:12:19.880775   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:12:19.896846   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:12:19.913614   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:12:19.929578   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:12:19.945677   32280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:12:19.957745   32280 ssh_runner.go:195] Run: openssl version
	I1002 20:12:19.963258   32280 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1002 20:12:19.963501   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:12:19.971695   32280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.975234   32280 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.975257   32280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.975294   32280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:12:20.009021   32280 command_runner.go:130] > 51391683
	I1002 20:12:20.009100   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:12:20.016966   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:12:20.025422   32280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.029194   32280 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.029238   32280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.029282   32280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.064218   32280 command_runner.go:130] > 3ec20f2e
	I1002 20:12:20.064321   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:12:20.072502   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:12:20.080739   32280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.084507   32280 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.084542   32280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.084576   32280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.118973   32280 command_runner.go:130] > b5213941
	I1002 20:12:20.119045   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
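The openssl/ln pairs above populate an OpenSSL hashed trust directory: each CA certificate is symlinked as <subject-hash>.0. A minimal sketch of the same mechanism for one certificate (variable names are illustrative):

CERT=/usr/share/ca-certificates/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941, as logged above
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"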
	I1002 20:12:20.127219   32280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:12:20.130733   32280 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:12:20.130756   32280 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1002 20:12:20.130765   32280 command_runner.go:130] > Device: 8,1	Inode: 579408      Links: 1
	I1002 20:12:20.130774   32280 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:12:20.130783   32280 command_runner.go:130] > Access: 2025-10-02 20:08:10.644972655 +0000
	I1002 20:12:20.130793   32280 command_runner.go:130] > Modify: 2025-10-02 20:04:06.596879146 +0000
	I1002 20:12:20.130799   32280 command_runner.go:130] > Change: 2025-10-02 20:04:06.596879146 +0000
	I1002 20:12:20.130806   32280 command_runner.go:130] >  Birth: 2025-10-02 20:04:06.596879146 +0000
	I1002 20:12:20.130872   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:12:20.164340   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.164601   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:12:20.199434   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.199512   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:12:20.233489   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.233589   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:12:20.266980   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.267235   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:12:20.300792   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.301105   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 20:12:20.334621   32280 command_runner.go:130] > Certificate will not expire
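Each "-checkend 86400" probe above asks whether a certificate expires within the next 86400 seconds (24 hours); openssl prints the verdict and sets the exit code accordingly. A minimal sketch against one of the certs staged above:

# Exit status 0 means the cert is still valid 24h from now, 1 means it is not.
openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
  || echo "renew apiserver.crt soon"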
	I1002 20:12:20.334895   32280 kubeadm.go:400] StartCluster: {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:12:20.334978   32280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:12:20.335040   32280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:12:20.362233   32280 cri.go:89] found id: ""
	I1002 20:12:20.362287   32280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:12:20.370000   32280 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 20:12:20.370022   32280 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 20:12:20.370028   32280 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 20:12:20.370045   32280 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:12:20.370050   32280 kubeadm.go:597] restartPrimaryControlPlane start ...
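	(Note: the restart decision above comes from the `sudo ls` probe on the previous lines: all three kubeadm artifacts exist, so minikube attempts a cluster restart instead of a fresh `kubeadm init`. A local-filesystem sketch of the same probe, hypothetical name hasExistingCluster; minikube runs the shell form over SSH inside the node:

	    package restart

	    import "os"

	    // hasExistingCluster reports whether all kubeadm artifacts from a
	    // previous start are still present.
	    func hasExistingCluster() bool {
	        for _, p := range []string{
	            "/var/lib/kubelet/config.yaml",
	            "/var/lib/kubelet/kubeadm-flags.env",
	            "/var/lib/minikube/etcd",
	        } {
	            if _, err := os.Stat(p); err != nil {
	                return false // any missing artifact forces a fresh init
	            }
	        }
	        return true // all present: attempt a control-plane restart
	    }
	)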
	I1002 20:12:20.370092   32280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:12:20.377231   32280 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:12:20.377306   32280 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-753218" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.377343   32280 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-9327/kubeconfig needs updating (will repair): [kubeconfig missing "functional-753218" cluster setting kubeconfig missing "functional-753218" context setting]
	I1002 20:12:20.377618   32280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
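	(Note: the repair logged above adds the missing "functional-753218" cluster and context entries to the kubeconfig under a file lock. A minimal sketch of that repair using client-go's clientcmd package, hypothetical name repairKubeconfig; minikube's own code also restores auth info, locking, and more fields:

	    package kubeconfigfix

	    import (
	        "k8s.io/client-go/tools/clientcmd"
	        api "k8s.io/client-go/tools/clientcmd/api"
	    )

	    // repairKubeconfig loads the kubeconfig, inserts cluster and context
	    // entries for the profile, and writes the file back.
	    func repairKubeconfig(path, profile, server string) error {
	        cfg, err := clientcmd.LoadFromFile(path)
	        if err != nil {
	            return err
	        }
	        cfg.Clusters[profile] = &api.Cluster{Server: server}
	        cfg.Contexts[profile] = &api.Context{Cluster: profile, AuthInfo: profile}
	        return clientcmd.WriteToFile(*cfg, path)
	    }

	Here path is /home/jenkins/minikube-integration/21683-9327/kubeconfig, profile is "functional-753218", and server is "https://192.168.49.2:8441".)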
	I1002 20:12:20.379016   32280 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.379143   32280 kapi.go:59] client config for functional-753218: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:12:20.379525   32280 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:12:20.379543   32280 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:12:20.379548   32280 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:12:20.379552   32280 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:12:20.379556   32280 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:12:20.379580   32280 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:12:20.379896   32280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:12:20.387047   32280 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:12:20.387086   32280 kubeadm.go:601] duration metric: took 17.030465ms to restartPrimaryControlPlane
	I1002 20:12:20.387097   32280 kubeadm.go:402] duration metric: took 52.210982ms to StartCluster
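	(Note: the "does not require reconfiguration" decision above rests on the `sudo diff -u kubeadm.yaml kubeadm.yaml.new` run two lines earlier: diff exiting 0 means the generated config matches the deployed one, so kubeadm need not be re-run. A local sketch of that comparison, hypothetical name needsReconfig:

	    package reconfig

	    import (
	        "bytes"
	        "os"
	    )

	    // needsReconfig reports whether the freshly generated kubeadm config
	    // differs from the one already on the node (diff exit status != 0).
	    func needsReconfig(current, generated string) (bool, error) {
	        a, err := os.ReadFile(current)
	        if err != nil {
	            return true, err
	        }
	        b, err := os.ReadFile(generated)
	        if err != nil {
	            return true, err
	        }
	        return !bytes.Equal(a, b), nil
	    }
	)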
	I1002 20:12:20.387113   32280 settings.go:142] acquiring lock: {Name:mk7417292ad351472478c5266970b58ebdd4130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:20.387221   32280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.387762   32280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:20.387978   32280 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:12:20.388069   32280 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:12:20.388123   32280 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:12:20.388170   32280 addons.go:69] Setting storage-provisioner=true in profile "functional-753218"
	I1002 20:12:20.388189   32280 addons.go:238] Setting addon storage-provisioner=true in "functional-753218"
	I1002 20:12:20.388224   32280 host.go:66] Checking if "functional-753218" exists ...
	I1002 20:12:20.388188   32280 addons.go:69] Setting default-storageclass=true in profile "functional-753218"
	I1002 20:12:20.388303   32280 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-753218"
	I1002 20:12:20.388534   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:20.388593   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:20.390858   32280 out.go:179] * Verifying Kubernetes components...
	I1002 20:12:20.392041   32280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:12:20.408831   32280 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.409013   32280 kapi.go:59] client config for functional-753218: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:12:20.409334   32280 addons.go:238] Setting addon default-storageclass=true in "functional-753218"
	I1002 20:12:20.409372   32280 host.go:66] Checking if "functional-753218" exists ...
	I1002 20:12:20.409857   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:20.409921   32280 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:12:20.411389   32280 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:20.411408   32280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:12:20.411451   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:20.434249   32280 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:20.434269   32280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:12:20.434323   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:20.437366   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:20.453124   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
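	(Note: the "scp memory -->" lines above stream the addon manifests from memory to the node over the SSH clients just opened (127.0.0.1:32778, user docker, the profile's id_rsa key). A sketch of that push under those assumptions, hypothetical name pushManifest, writing via `sudo tee`; disabling host-key checking is acceptable only against a throwaway local test node like this one:

	    package sshpush

	    import (
	        "bytes"
	        "os"

	        "golang.org/x/crypto/ssh"
	    )

	    // pushManifest streams data to dest on the node over SSH.
	    func pushManifest(addr, keyPath, user, dest string, data []byte) error {
	        key, err := os.ReadFile(keyPath)
	        if err != nil {
	            return err
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            return err
	        }
	        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
	            User:            user,
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only
	        })
	        if err != nil {
	            return err
	        }
	        defer client.Close()
	        sess, err := client.NewSession()
	        if err != nil {
	            return err
	        }
	        defer sess.Close()
	        sess.Stdin = bytes.NewReader(data)
	        return sess.Run("sudo tee " + dest + " >/dev/null")
	    }
	)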
	I1002 20:12:20.491163   32280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:12:20.504681   32280 node_ready.go:35] waiting up to 6m0s for node "functional-753218" to be "Ready" ...
	I1002 20:12:20.504813   32280 type.go:168] "Request Body" body=""
	I1002 20:12:20.504901   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:20.505187   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
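	(Note: from here the log interleaves two loops: addon applies with retries, and a GET of /api/v1/nodes/functional-753218 roughly every 500ms waiting for the node to report Ready; the empty status="" responses mean the TCP connection itself is being refused. A client-go sketch of the polling half under those assumptions, hypothetical name waitNodeReady; minikube's own code drives the round trippers directly, as the log shows:

	    package nodewait

	    import (
	        "context"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitNodeReady polls the node's Ready condition every 500ms until it
	    // is true or the timeout expires; transient errors (e.g. connection
	    // refused while the apiserver restarts) are treated as retryable.
	    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
	            func(ctx context.Context) (bool, error) {
	                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	                if err != nil {
	                    return false, nil // retryable: apiserver not up yet
	                }
	                for _, c := range node.Status.Conditions {
	                    if c.Type == corev1.NodeReady {
	                        return c.Status == corev1.ConditionTrue, nil
	                    }
	                }
	                return false, nil
	            })
	    }
	)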
	I1002 20:12:20.544925   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:20.560749   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:20.598254   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:20.598305   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.598334   32280 retry.go:31] will retry after 360.790251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
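	(Note: every apply fails the same way because kubectl's client-side validation must fetch /openapi/v2 from the apiserver, which is still refusing connections; minikube therefore keeps retrying, and the varying "will retry after ..." intervals in the lines that follow show delays growing with jitter. A generic sketch of that pattern, hypothetical name withBackoff, not minikube's actual retry.go implementation:

	    package applyretry

	    import (
	        "math/rand"
	        "time"
	    )

	    // withBackoff retries a failing operation with doubling, jittered
	    // delays so concurrent retry loops do not fire in lockstep.
	    func withBackoff(attempts int, run func() error) error {
	        delay := 200 * time.Millisecond
	        var err error
	        for i := 0; i < attempts; i++ {
	            if err = run(); err == nil {
	                return nil
	            }
	            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
	            delay *= 2
	        }
	        return err
	    }
	)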
	I1002 20:12:20.611750   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:20.611829   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.611854   32280 retry.go:31] will retry after 210.270105ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.822270   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:20.872283   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:20.874485   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.874514   32280 retry.go:31] will retry after 244.966298ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.959846   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:21.005341   32280 type.go:168] "Request Body" body=""
	I1002 20:12:21.005421   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:21.005781   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:21.012418   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.012451   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.012466   32280 retry.go:31] will retry after 409.292121ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.119728   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:21.168429   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.170739   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.170771   32280 retry.go:31] will retry after 294.217693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.422106   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:21.465688   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:21.470239   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.472502   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.472537   32280 retry.go:31] will retry after 332.995728ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.505685   32280 type.go:168] "Request Body" body=""
	I1002 20:12:21.505778   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:21.506123   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:21.516911   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.516971   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.516996   32280 retry.go:31] will retry after 954.810325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.806393   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:21.857573   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.857614   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.857637   32280 retry.go:31] will retry after 1.033500231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.004877   32280 type.go:168] "Request Body" body=""
	I1002 20:12:22.004976   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:22.005310   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:22.472906   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:22.505435   32280 type.go:168] "Request Body" body=""
	I1002 20:12:22.505517   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:22.505893   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:22.505957   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:22.524411   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:22.524454   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.524474   32280 retry.go:31] will retry after 931.915639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.892005   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:22.942851   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:22.942928   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.942955   32280 retry.go:31] will retry after 1.834952264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:23.004927   32280 type.go:168] "Request Body" body=""
	I1002 20:12:23.005007   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:23.005354   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:23.456821   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:23.505094   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:23.505147   32280 type.go:168] "Request Body" body=""
	I1002 20:12:23.505217   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:23.505484   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:23.507597   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:23.507626   32280 retry.go:31] will retry after 2.313716894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:24.005157   32280 type.go:168] "Request Body" body=""
	I1002 20:12:24.005267   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:24.005594   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:24.505508   32280 type.go:168] "Request Body" body=""
	I1002 20:12:24.505632   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:24.506012   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:24.506092   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:24.778419   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:24.830315   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:24.830361   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:24.830382   32280 retry.go:31] will retry after 2.530323246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:25.005736   32280 type.go:168] "Request Body" body=""
	I1002 20:12:25.005808   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:25.006117   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:25.504853   32280 type.go:168] "Request Body" body=""
	I1002 20:12:25.504920   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:25.505273   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:25.821714   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:25.872812   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:25.872859   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:25.872881   32280 retry.go:31] will retry after 1.957365536s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:26.005078   32280 type.go:168] "Request Body" body=""
	I1002 20:12:26.005153   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:26.005503   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:26.505250   32280 type.go:168] "Request Body" body=""
	I1002 20:12:26.505323   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:26.505711   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:27.005530   32280 type.go:168] "Request Body" body=""
	I1002 20:12:27.005599   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:27.005959   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:27.006023   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:27.361473   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:27.411520   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:27.413776   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:27.413807   32280 retry.go:31] will retry after 3.768585845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:27.504922   32280 type.go:168] "Request Body" body=""
	I1002 20:12:27.505019   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:27.505351   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:27.830904   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:27.880071   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:27.882324   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:27.882350   32280 retry.go:31] will retry after 2.676983733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:28.005719   32280 type.go:168] "Request Body" body=""
	I1002 20:12:28.005789   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:28.006101   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:28.504826   32280 type.go:168] "Request Body" body=""
	I1002 20:12:28.504909   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:28.505226   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:29.004968   32280 type.go:168] "Request Body" body=""
	I1002 20:12:29.005052   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:29.005361   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:29.505178   32280 type.go:168] "Request Body" body=""
	I1002 20:12:29.505270   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:29.505576   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:29.505628   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:30.005335   32280 type.go:168] "Request Body" body=""
	I1002 20:12:30.005400   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:30.005747   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:30.505557   32280 type.go:168] "Request Body" body=""
	I1002 20:12:30.505643   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:30.505971   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:30.560186   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:30.610807   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:30.610870   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:30.610892   32280 retry.go:31] will retry after 7.973230912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:31.005274   32280 type.go:168] "Request Body" body=""
	I1002 20:12:31.005351   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:31.005789   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:31.182990   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:31.231953   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:31.234462   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:31.234491   32280 retry.go:31] will retry after 5.687657455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:31.504842   32280 type.go:168] "Request Body" body=""
	I1002 20:12:31.504956   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:31.505254   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:32.005885   32280 type.go:168] "Request Body" body=""
	I1002 20:12:32.005962   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:32.006262   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:32.006314   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:32.504840   32280 type.go:168] "Request Body" body=""
	I1002 20:12:32.504910   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:32.505216   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:33.005827   32280 type.go:168] "Request Body" body=""
	I1002 20:12:33.005903   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:33.006210   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:33.505861   32280 type.go:168] "Request Body" body=""
	I1002 20:12:33.505928   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:33.506234   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:34.005834   32280 type.go:168] "Request Body" body=""
	I1002 20:12:34.005939   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:34.006292   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:34.006347   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:34.505067 to 20:12:36.505273   32280 round_trippers: [condensed: 5 GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-753218 at ~500ms intervals, each with the same Accept/User-Agent headers and the same empty response (status="" headers="" milliseconds=0) as the sample above]
	W1002 20:12:36.505325   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
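Each ~500ms cycle above is one iteration of the node-readiness wait. A minimal sketch of that loop, assuming client-go and hypothetical names (waitNodeReady, the 6-minute timeout, and the kubeconfig path are illustrative, not minikube's actual node_ready.go):

    // Sketch: poll the node's Ready condition every 500ms until it is True or
    // the context expires, logging failed GETs and retrying, as the warnings
    // in this log do.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
        tick := time.NewTicker(500 * time.Millisecond)
        defer tick.Stop()
        for {
            node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                // Mirrors the "will retry" warnings above: log and keep polling.
                fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
            } else {
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-tick.C:
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "functional-753218"); err != nil {
            fmt.Println("node never reported Ready:", err)
        }
    }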
	I1002 20:12:36.922844   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:36.972691   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:36.975093   32280 addons.go:461] apply failed, will retry: Process exited with status 1 (stdout empty; stderr repeats the openapi validation error above)
	I1002 20:12:36.975120   32280 retry.go:31] will retry after 6.057609391s
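The retry.go lines re-run the failed apply after a randomized, growing delay (6.06s here, then 11.47s, 13.70s, and so on below). A hedged sketch of that retry-with-jittered-backoff shape, with applyWithRetry and the base delay as assumptions rather than minikube's actual retry package:

    // Sketch: retry a kubectl apply with jittered, doubling backoff.
    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func applyWithRetry(manifest string, attempts int) error {
        backoff := 5 * time.Second
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            // Jitter so the concurrent appliers (storage-provisioner,
            // storageclass, ...) do not retry in lockstep against a
            // still-restarting apiserver.
            delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("apply failed, will retry after %v: %v\n%s", delay, err, out)
            time.Sleep(delay)
            backoff *= 2
        }
        return fmt.Errorf("%s: giving up after %d attempts", manifest, attempts)
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
            fmt.Println(err)
        }
    }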
	I1002 20:12:37.005334 to 20:12:38.506204   32280 round_trippers: [condensed: 4 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:12:38.506258   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:38.584343   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:38.634498   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:38.634541   32280 addons.go:461] apply failed, will retry: Process exited with status 1 (stdout empty; stderr repeats the openapi validation error above)
	I1002 20:12:38.634559   32280 retry.go:31] will retry after 11.473349324s
	I1002 20:12:39.004966 to 20:12:41.005984   32280 round_trippers: [condensed: 5 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:12:41.006049   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:41.505595 to 20:12:43.006025   32280 round_trippers: [condensed: 4 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:12:43.006077   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:43.033216   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:43.084626   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:43.084680   32280 addons.go:461] apply failed, will retry: Process exited with status 1 (stdout empty; stderr repeats the openapi validation error above)
	I1002 20:12:43.084700   32280 retry.go:31] will retry after 13.696949746s
	I1002 20:12:43.504971 to 20:12:45.505300   32280 round_trippers: [condensed: 5 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:12:45.505354   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:46.004960 to 20:12:47.506320   32280 round_trippers: [condensed: 4 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:12:47.506400   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:48.004928 to 20:12:50.005336   32280 round_trippers: [condensed: 5 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:12:50.005399   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:50.108603   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:50.158622   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:50.158675   32280 addons.go:461] apply failed, will retry: Process exited with status 1 (stdout empty; stderr repeats the openapi validation error above)
	I1002 20:12:50.158705   32280 retry.go:31] will retry after 7.866512619s
	I1002 20:12:50.505487 to 20:12:52.006225   32280 round_trippers: [condensed: 4 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:12:52.006281   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:52.504874 to 20:12:54.505803   32280 round_trippers: [condensed: 5 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:12:54.505860   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:55.005500 to 20:12:56.506244   32280 round_trippers: [condensed: 4 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:12:56.506305   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:56.782639   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:56.831722   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:56.833971   32280 addons.go:461] apply failed, will retry: Process exited with status 1 (stdout empty; stderr repeats the openapi validation error above)
	I1002 20:12:56.834005   32280 retry.go:31] will retry after 8.803585786s
	I1002 20:12:57.005357 to 20:12:58.005752   32280 round_trippers: [condensed: 3 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	I1002 20:12:58.025966   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:58.074036   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:58.076335   32280 addons.go:461] apply failed, will retry: Process exited with status 1 (stdout empty; stderr repeats the openapi validation error above)
	I1002 20:12:58.076367   32280 retry.go:31] will retry after 21.837732561s
	I1002 20:12:58.504884 to 20:12:59.005416   32280 round_trippers: [condensed: 2 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:12:59.005476   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:59.505294 to 20:13:01.005454   32280 round_trippers: [condensed: 4 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:13:01.005507   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:01.505230 to 20:13:03.505496   32280 round_trippers: [condensed: 5 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:13:03.505553   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:04.005013 to 20:13:05.506017   32280 round_trippers: [condensed: 4 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:13:05.506071   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:05.638454   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:13:05.690182   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:05.690237   32280 addons.go:461] apply failed, will retry: Process exited with status 1 (stdout empty; stderr repeats the openapi validation error above)
	I1002 20:13:05.690256   32280 retry.go:31] will retry after 17.824989731s
	I1002 20:13:06.005701 to 20:13:08.005783   32280 round_trippers: [condensed: 5 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:13:08.005845   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:08.505633 to 20:13:10.505609   32280 round_trippers: [condensed: 5 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:13:10.505692   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:11.005490 to 20:13:13.005692   32280 round_trippers: [condensed: 5 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:13:13.005741   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:13.505519 to 20:13:15.505799   32280 round_trippers: [condensed: 5 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:13:15.505864   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:16.005581 to 20:13:18.005594   32280 round_trippers: [condensed: 5 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	W1002 20:13:18.005675   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:18.505365 to 20:13:19.506274   32280 round_trippers: [condensed: 3 GET polls of /api/v1/nodes/functional-753218 at ~500ms; every response empty]
	I1002 20:13:19.914795   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:13:19.964946   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:19.964982   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:13:19.964998   32280 retry.go:31] will retry after 37.877741779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
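The retry.go line above shows the apply-with-backoff behavior: kubectl apply fails while the apiserver is down, and the addon manager schedules another attempt after a jittered delay (hence the non-round 37.877741779s). A minimal sketch of that pattern, using a hypothetical applyWithRetry helper rather than minikube's real retry.go:

package main

import (
	"fmt"
	"math/rand"
	"os"
	"os/exec"
	"time"
)

// applyWithRetry shells out to `kubectl apply` and retries on failure with a
// jittered, growing backoff until a deadline. Paths are taken from the log.
func applyWithRetry(kubeconfig, manifest string, deadline time.Duration) error {
	start := time.Now()
	backoff := 10 * time.Second // hypothetical base delay
	for {
		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up: %v\n%s", err, out)
		}
		// Jitter the wait so concurrent appliers do not retry in lockstep,
		// which is consistent with the fractional delays in the log.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		backoff *= 2
	}
}

func main() {
	_ = applyWithRetry("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml", 2*time.Minute)
}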
	I1002 20:13:20.005163   32280 type.go:168] "Request Body" body=""
	I1002 20:13:20.005260   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:20.005579   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:20.505603   32280 type.go:168] "Request Body" body=""
	I1002 20:13:20.505696   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:20.506040   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:20.506105   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:21.005687   32280 type.go:168] "Request Body" body=""
	I1002 20:13:21.005752   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:21.006074   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:21.505754   32280 type.go:168] "Request Body" body=""
	I1002 20:13:21.505828   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:21.506211   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:22.005841   32280 type.go:168] "Request Body" body=""
	I1002 20:13:22.005906   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:22.006231   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:22.505901   32280 type.go:168] "Request Body" body=""
	I1002 20:13:22.506010   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:22.506365   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:22.506463   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:23.004934   32280 type.go:168] "Request Body" body=""
	I1002 20:13:23.005035   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:23.005390   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:23.504963   32280 type.go:168] "Request Body" body=""
	I1002 20:13:23.505048   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:23.505365   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:23.515608   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:13:23.566822   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:23.566879   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:13:23.566903   32280 retry.go:31] will retry after 23.13190401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
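Note why the failure surfaces as a validation error: kubectl apply validates manifests against the cluster's OpenAPI schema, so when the apiserver at [::1]:8441 refuses connections, even validation fails before anything is applied, which is why the message suggests --validate=false. One cheap way to distinguish "apiserver not listening yet" from a genuine manifest problem is a plain TCP dial; a minimal sketch (the address is the one kubectl dials above):

package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverUp reports whether something is accepting TCP connections at addr.
// A bare dial is enough to separate "connection refused" (process not
// listening) from an HTTP- or manifest-level failure.
func apiserverUp(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println("apiserver up:", apiserverUp("127.0.0.1:8441"))
}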
	I1002 20:13:24.005366   32280 type.go:168] "Request Body" body=""
	I1002 20:13:24.005433   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:24.005789   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:24.505700   32280 type.go:168] "Request Body" body=""
	I1002 20:13:24.505774   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:24.506172   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:25.005817   32280 type.go:168] "Request Body" body=""
	I1002 20:13:25.005885   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:25.006218   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:25.006274   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:25.505892   32280 type.go:168] "Request Body" body=""
	I1002 20:13:25.505960   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:25.506325   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:26.005002   32280 type.go:168] "Request Body" body=""
	I1002 20:13:26.005093   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:26.005420   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:26.505011   32280 type.go:168] "Request Body" body=""
	I1002 20:13:26.505082   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:26.505477   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:27.005016   32280 type.go:168] "Request Body" body=""
	I1002 20:13:27.005085   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:27.005415   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:27.505000   32280 type.go:168] "Request Body" body=""
	I1002 20:13:27.505094   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:27.505420   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:27.505471   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:28.004995   32280 type.go:168] "Request Body" body=""
	I1002 20:13:28.005059   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:28.005387   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:28.504979   32280 type.go:168] "Request Body" body=""
	I1002 20:13:28.505063   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:28.505426   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:29.005002   32280 type.go:168] "Request Body" body=""
	I1002 20:13:29.005070   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:29.005385   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:29.505292   32280 type.go:168] "Request Body" body=""
	I1002 20:13:29.505364   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:29.505745   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:29.505830   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
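The node_ready.go warnings come from a readiness poll: GET the node on a ~500ms cadence and inspect its Ready condition, treating transport errors like the refused dials above as retryable. A minimal client-go sketch of that loop (not minikube's actual node_ready.go; the node name and kubeconfig path are taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True or the
// timeout expires, logging and retrying on transient API errors.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Matches the warnings above: report and keep polling.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log polls on a ~500ms cadence
	}
	return fmt.Errorf("node %q never became Ready", name)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	_ = waitNodeReady(cs, "functional-753218", 4*time.Minute)
}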
	I1002 20:13:30.005263   32280 type.go:168] "Request Body" body=""
	I1002 20:13:30.005354   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:30.005711   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:30.505565   32280 type.go:168] "Request Body" body=""
	I1002 20:13:30.505630   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:30.505975   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:31.005629   32280 type.go:168] "Request Body" body=""
	I1002 20:13:31.005725   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:31.006066   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:31.505717   32280 type.go:168] "Request Body" body=""
	I1002 20:13:31.505806   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:31.506146   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:31.506205   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:32.005772   32280 type.go:168] "Request Body" body=""
	I1002 20:13:32.005834   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:32.006141   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:32.505757   32280 type.go:168] "Request Body" body=""
	I1002 20:13:32.505827   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:32.506202   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:33.005813   32280 type.go:168] "Request Body" body=""
	I1002 20:13:33.005879   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:33.006207   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:33.505854   32280 type.go:168] "Request Body" body=""
	I1002 20:13:33.505928   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:33.506299   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:33.506364   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:34.004865   32280 type.go:168] "Request Body" body=""
	I1002 20:13:34.004937   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:34.005277   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:34.505059   32280 type.go:168] "Request Body" body=""
	I1002 20:13:34.505145   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:34.505557   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:35.005136   32280 type.go:168] "Request Body" body=""
	I1002 20:13:35.005210   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:35.005522   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:35.505130   32280 type.go:168] "Request Body" body=""
	I1002 20:13:35.505200   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:35.505574   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:36.005135   32280 type.go:168] "Request Body" body=""
	I1002 20:13:36.005205   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:36.005539   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:36.005593   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:36.505113   32280 type.go:168] "Request Body" body=""
	I1002 20:13:36.505181   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:36.505623   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:37.005206   32280 type.go:168] "Request Body" body=""
	I1002 20:13:37.005280   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:37.005599   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:37.505187   32280 type.go:168] "Request Body" body=""
	I1002 20:13:37.505253   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:37.505612   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:38.005212   32280 type.go:168] "Request Body" body=""
	I1002 20:13:38.005309   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:38.005632   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:38.005716   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:38.505228   32280 type.go:168] "Request Body" body=""
	I1002 20:13:38.505309   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:38.505743   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:39.005283   32280 type.go:168] "Request Body" body=""
	I1002 20:13:39.005351   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:39.005688   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:39.505535   32280 type.go:168] "Request Body" body=""
	I1002 20:13:39.505601   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:39.505971   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:40.005741   32280 type.go:168] "Request Body" body=""
	I1002 20:13:40.005811   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:40.006142   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:40.006200   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:40.504914   32280 type.go:168] "Request Body" body=""
	I1002 20:13:40.504981   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:40.505341   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:41.004924   32280 type.go:168] "Request Body" body=""
	I1002 20:13:41.004987   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:41.005308   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:41.504899   32280 type.go:168] "Request Body" body=""
	I1002 20:13:41.504961   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:41.505269   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:42.004835   32280 type.go:168] "Request Body" body=""
	I1002 20:13:42.004910   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:42.005229   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:42.504815   32280 type.go:168] "Request Body" body=""
	I1002 20:13:42.504896   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:42.505252   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:42.505312   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:43.004917   32280 type.go:168] "Request Body" body=""
	I1002 20:13:43.004992   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:43.005315   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:43.504906   32280 type.go:168] "Request Body" body=""
	I1002 20:13:43.504998   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:43.505371   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:44.004978   32280 type.go:168] "Request Body" body=""
	I1002 20:13:44.005041   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:44.005412   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:44.505515   32280 type.go:168] "Request Body" body=""
	I1002 20:13:44.505582   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:44.505949   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:44.505999   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:45.005614   32280 type.go:168] "Request Body" body=""
	I1002 20:13:45.005720   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:45.006047   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:45.505675   32280 type.go:168] "Request Body" body=""
	I1002 20:13:45.505766   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:45.506082   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:46.005784   32280 type.go:168] "Request Body" body=""
	I1002 20:13:46.005862   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:46.006192   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:46.505803   32280 type.go:168] "Request Body" body=""
	I1002 20:13:46.505894   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:46.506217   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:46.506269   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:46.699644   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:13:46.747344   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:46.749844   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:46.749973   32280 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
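The "running callbacks" wording in the warning above suggests each addon enable is a callback whose failure is reported without aborting startup, so the run proceeds with the addon disabled. A minimal sketch with hypothetical names (minikube's real addons.go is more involved):

package main

import (
	"errors"
	"fmt"
)

type callback func() error

// runCallbacks invokes each addon's enable function and reports failures
// individually, mirroring the "! Enabling '<addon>' returned an error" lines.
func runCallbacks(cbs map[string]callback) {
	for name, cb := range cbs {
		if err := cb(); err != nil {
			fmt.Printf("! Enabling %q returned an error: running callbacks: [%v]\n", name, err)
		}
	}
}

func main() {
	runCallbacks(map[string]callback{
		"storage-provisioner": func() error { return errors.New("connection refused") },
	})
}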
	I1002 20:13:47.005313   32280 type.go:168] "Request Body" body=""
	I1002 20:13:47.005446   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:47.005788   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:47.505665   32280 type.go:168] "Request Body" body=""
	I1002 20:13:47.505730   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:47.506069   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:48.005897   32280 type.go:168] "Request Body" body=""
	I1002 20:13:48.005960   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:48.006265   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:48.505011   32280 type.go:168] "Request Body" body=""
	I1002 20:13:48.505103   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:48.505428   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:49.005178   32280 type.go:168] "Request Body" body=""
	I1002 20:13:49.005244   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:49.005588   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:49.005688   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:49.505357   32280 type.go:168] "Request Body" body=""
	I1002 20:13:49.505439   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:49.505750   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:50.005608   32280 type.go:168] "Request Body" body=""
	I1002 20:13:50.005698   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:50.006038   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:50.504871   32280 type.go:168] "Request Body" body=""
	I1002 20:13:50.504975   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:50.505342   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:51.005115   32280 type.go:168] "Request Body" body=""
	I1002 20:13:51.005179   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:51.005488   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:51.505213   32280 type.go:168] "Request Body" body=""
	I1002 20:13:51.505301   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:51.505613   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:51.505717   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:52.005522   32280 type.go:168] "Request Body" body=""
	I1002 20:13:52.005612   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:52.005939   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:52.505742   32280 type.go:168] "Request Body" body=""
	I1002 20:13:52.505819   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:52.506150   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:53.004884   32280 type.go:168] "Request Body" body=""
	I1002 20:13:53.004954   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:53.005266   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:53.505024   32280 type.go:168] "Request Body" body=""
	I1002 20:13:53.505129   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:53.505472   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:54.005163   32280 type.go:168] "Request Body" body=""
	I1002 20:13:54.005247   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:54.005550   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:54.005630   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:54.505374   32280 type.go:168] "Request Body" body=""
	I1002 20:13:54.505439   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:54.505844   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:55.005681   32280 type.go:168] "Request Body" body=""
	I1002 20:13:55.005746   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:55.006118   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:55.504859   32280 type.go:168] "Request Body" body=""
	I1002 20:13:55.504950   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:55.505290   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:56.004973   32280 type.go:168] "Request Body" body=""
	I1002 20:13:56.005052   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:56.005360   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:56.505092   32280 type.go:168] "Request Body" body=""
	I1002 20:13:56.505157   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:56.505484   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:56.505543   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:57.005232   32280 type.go:168] "Request Body" body=""
	I1002 20:13:57.005319   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:57.005627   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:57.505479   32280 type.go:168] "Request Body" body=""
	I1002 20:13:57.505542   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:57.505874   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:57.843521   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:13:57.893953   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:57.894023   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:57.894118   32280 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:13:57.896474   32280 out.go:179] * Enabled addons: 
	I1002 20:13:57.898063   32280 addons.go:514] duration metric: took 1m37.510002204s for enable addons: enabled=[]
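The "duration metric" line is the usual start-time/elapsed pattern; because every apply failed, the enabled list is empty even though the addons step "completes" after 1m37s. A minimal sketch of how such a line is emitted:

package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	time.Sleep(50 * time.Millisecond) // stand-in for the addon enable loop
	// Log the elapsed time with a label, as in the log line above.
	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
		time.Since(start), []string{})
}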
	I1002 20:13:58.005248   32280 type.go:168] "Request Body" body=""
	I1002 20:13:58.005351   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:58.005671   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:58.505487   32280 type.go:168] "Request Body" body=""
	I1002 20:13:58.505565   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:58.505958   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:58.506014   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:59.005771   32280 type.go:168] "Request Body" body=""
	I1002 20:13:59.005876   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:59.006192   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:59.504962   32280 type.go:168] "Request Body" body=""
	I1002 20:13:59.505047   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:59.505359   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:00.005006   32280 type.go:168] "Request Body" body=""
	I1002 20:14:00.005069   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:00.005392   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:00.505111   32280 type.go:168] "Request Body" body=""
	I1002 20:14:00.505199   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:00.505503   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:01.005227   32280 type.go:168] "Request Body" body=""
	I1002 20:14:01.005326   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:01.005717   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:01.005789   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:01.505598   32280 type.go:168] "Request Body" body=""
	I1002 20:14:01.505687   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:01.506000   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:02.005861   32280 type.go:168] "Request Body" body=""
	I1002 20:14:02.005935   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:02.006338   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:02.504980   32280 type.go:168] "Request Body" body=""
	I1002 20:14:02.505043   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:02.505444   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:03.005199   32280 type.go:168] "Request Body" body=""
	I1002 20:14:03.005295   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:03.005617   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:03.505417   32280 type.go:168] "Request Body" body=""
	I1002 20:14:03.505500   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:03.505831   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:03.505910   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:04.005688   32280 type.go:168] "Request Body" body=""
	I1002 20:14:04.005768   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:04.006079   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:04.505822   32280 type.go:168] "Request Body" body=""
	I1002 20:14:04.505929   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:04.506212   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:05.004939   32280 type.go:168] "Request Body" body=""
	I1002 20:14:05.005032   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:05.005365   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:05.505085   32280 type.go:168] "Request Body" body=""
	I1002 20:14:05.505167   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:05.505489   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:06.005229   32280 type.go:168] "Request Body" body=""
	I1002 20:14:06.005293   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:06.005679   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:06.005733   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:06.505561   32280 type.go:168] "Request Body" body=""
	I1002 20:14:06.505662   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:06.505997   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:07.005758   32280 type.go:168] "Request Body" body=""
	I1002 20:14:07.005865   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:07.006186   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:07.504924   32280 type.go:168] "Request Body" body=""
	I1002 20:14:07.504999   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:07.505319   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:08.005020   32280 type.go:168] "Request Body" body=""
	I1002 20:14:08.005110   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:08.005412   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:08.505144   32280 type.go:168] "Request Body" body=""
	I1002 20:14:08.505221   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:08.505546   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:08.505597   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:09.005324   32280 type.go:168] "Request Body" body=""
	I1002 20:14:09.005388   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:09.005759   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:09.505663   32280 type.go:168] "Request Body" body=""
	I1002 20:14:09.505738   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:09.506059   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:10.004831   32280 type.go:168] "Request Body" body=""
	I1002 20:14:10.004913   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:10.005285   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:10.504951   32280 type.go:168] "Request Body" body=""
	I1002 20:14:10.505047   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:10.505396   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:11.005158   32280 type.go:168] "Request Body" body=""
	I1002 20:14:11.005275   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:11.005733   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:11.005797   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:11.505549   32280 type.go:168] "Request Body" body=""
	I1002 20:14:11.505697   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:11.506073   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:12.005903   32280 type.go:168] "Request Body" body=""
	I1002 20:14:12.005966   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:12.006268   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:12.505003   32280 type.go:168] "Request Body" body=""
	I1002 20:14:12.505086   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:12.505427   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:13.004849   32280 type.go:168] "Request Body" body=""
	I1002 20:14:13.004968   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:13.005312   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:13.505032   32280 type.go:168] "Request Body" body=""
	I1002 20:14:13.505109   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:13.505438   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:13.505493   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:14.005138   32280 type.go:168] "Request Body" body=""
	I1002 20:14:14.005202   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:14.005533   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:14.505306   32280 type.go:168] "Request Body" body=""
	I1002 20:14:14.505402   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:14.505762   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:15.005543   32280 type.go:168] "Request Body" body=""
	I1002 20:14:15.005604   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:15.005962   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:15.505741   32280 type.go:168] "Request Body" body=""
	I1002 20:14:15.505841   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:15.506168   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:15.506245   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:16.005122   32280 type.go:168] "Request Body" body=""
	I1002 20:14:16.005232   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:16.005696   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:16.504984   32280 type.go:168] "Request Body" body=""
	I1002 20:14:16.505049   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:16.505370   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:17.004896   32280 type.go:168] "Request Body" body=""
	I1002 20:14:17.004982   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:17.005311   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:17.504836   32280 type.go:168] "Request Body" body=""
	I1002 20:14:17.504907   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:17.505220   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:18.005868   32280 type.go:168] "Request Body" body=""
	I1002 20:14:18.005951   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:18.006358   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:18.006423   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:18.504940   32280 type.go:168] "Request Body" body=""
	I1002 20:14:18.505026   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:18.505333   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:19.004866   32280 type.go:168] "Request Body" body=""
	I1002 20:14:19.004945   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:19.005275   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:19.505078   32280 type.go:168] "Request Body" body=""
	I1002 20:14:19.505155   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:19.505483   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:20.004994   32280 type.go:168] "Request Body" body=""
	I1002 20:14:20.005076   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:20.005381   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:20.505242   32280 type.go:168] "Request Body" body=""
	I1002 20:14:20.505307   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:20.505631   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:20.505718   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:21.005226   32280 type.go:168] "Request Body" body=""
	I1002 20:14:21.005289   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:21.005590   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:21.505335   32280 type.go:168] "Request Body" body=""
	I1002 20:14:21.505404   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:21.505749   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:22.005375   32280 type.go:168] "Request Body" body=""
	I1002 20:14:22.005439   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:22.005744   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:22.505304   32280 type.go:168] "Request Body" body=""
	I1002 20:14:22.505371   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:22.505716   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:22.505771   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:23.005272   32280 type.go:168] "Request Body" body=""
	I1002 20:14:23.005334   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:23.005644   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:23.505227   32280 type.go:168] "Request Body" body=""
	I1002 20:14:23.505324   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:23.505721   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:24.005280   32280 type.go:168] "Request Body" body=""
	I1002 20:14:24.005348   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:24.005690   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:24.505614   32280 type.go:168] "Request Body" body=""
	I1002 20:14:24.505707   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:24.506064   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:24.506123   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:25.005722   32280 type.go:168] "Request Body" body=""
	I1002 20:14:25.005789   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:25.006118   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:25.505754   32280 type.go:168] "Request Body" body=""
	I1002 20:14:25.505821   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:25.506147   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:26.005768   32280 type.go:168] "Request Body" body=""
	I1002 20:14:26.005838   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:26.006153   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:26.505742   32280 type.go:168] "Request Body" body=""
	I1002 20:14:26.505810   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:26.506121   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:26.506173   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:27.005763   32280 type.go:168] "Request Body" body=""
	I1002 20:14:27.005839   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:27.006182   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:27.505814   32280 type.go:168] "Request Body" body=""
	I1002 20:14:27.505878   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:27.506202   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:28.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:14:28.005938   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:28.006243   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:28.504821   32280 type.go:168] "Request Body" body=""
	I1002 20:14:28.504889   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:28.505244   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:29.005929   32280 type.go:168] "Request Body" body=""
	I1002 20:14:29.005998   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:29.006317   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:29.006373   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:29.505885   32280 type.go:168] "Request Body" body=""
	I1002 20:14:29.505955   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:29.506284   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:30.004871   32280 type.go:168] "Request Body" body=""
	I1002 20:14:30.004946   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:30.005283   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:30.505131   32280 type.go:168] "Request Body" body=""
	I1002 20:14:30.505212   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:30.505536   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:31.005137   32280 type.go:168] "Request Body" body=""
	I1002 20:14:31.005230   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:31.005549   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:31.505115   32280 type.go:168] "Request Body" body=""
	I1002 20:14:31.505177   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:31.505493   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:31.505544   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:32.005077   32280 type.go:168] "Request Body" body=""
	I1002 20:14:32.005142   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:32.005447   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:32.505767   32280 type.go:168] "Request Body" body=""
	I1002 20:14:32.505835   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:32.506138   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:33.005842   32280 type.go:168] "Request Body" body=""
	I1002 20:14:33.005927   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:33.006231   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:33.505868   32280 type.go:168] "Request Body" body=""
	I1002 20:14:33.505947   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:33.506252   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:33.506315   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:34.004818   32280 type.go:168] "Request Body" body=""
	I1002 20:14:34.004919   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:34.005210   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:34.504930   32280 type.go:168] "Request Body" body=""
	I1002 20:14:34.505008   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:34.505322   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:35.004949   32280 type.go:168] "Request Body" body=""
	I1002 20:14:35.005011   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:35.005319   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:35.505837   32280 type.go:168] "Request Body" body=""
	I1002 20:14:35.505935   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:35.506248   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:36.005867   32280 type.go:168] "Request Body" body=""
	I1002 20:14:36.005936   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:36.006232   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:36.006283   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:36.505902   32280 type.go:168] "Request Body" body=""
	I1002 20:14:36.506056   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:36.506384   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:37.004951   32280 type.go:168] "Request Body" body=""
	I1002 20:14:37.005021   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:37.005328   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:37.504906   32280 type.go:168] "Request Body" body=""
	I1002 20:14:37.504995   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:37.505334   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:38.004876   32280 type.go:168] "Request Body" body=""
	I1002 20:14:38.004944   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:38.005255   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:38.504831   32280 type.go:168] "Request Body" body=""
	I1002 20:14:38.504917   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:38.505277   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:38.505331   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:39.004819   32280 type.go:168] "Request Body" body=""
	I1002 20:14:39.004911   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:39.005204   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:39.505017   32280 type.go:168] "Request Body" body=""
	I1002 20:14:39.505087   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:39.505399   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:40.005080   32280 type.go:168] "Request Body" body=""
	I1002 20:14:40.005144   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:40.005445   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:40.505248   32280 type.go:168] "Request Body" body=""
	I1002 20:14:40.505310   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:40.505614   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:40.505711   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:41.005196   32280 type.go:168] "Request Body" body=""
	I1002 20:14:41.005309   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:41.005631   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:41.505223   32280 type.go:168] "Request Body" body=""
	I1002 20:14:41.505304   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:41.505623   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:42.005154   32280 type.go:168] "Request Body" body=""
	I1002 20:14:42.005238   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:42.005535   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:42.505095   32280 type.go:168] "Request Body" body=""
	I1002 20:14:42.505175   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:42.505514   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:43.005064   32280 type.go:168] "Request Body" body=""
	I1002 20:14:43.005128   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:43.005441   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:43.005493   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:43.504991   32280 type.go:168] "Request Body" body=""
	I1002 20:14:43.505079   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:43.505393   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:44.004948   32280 type.go:168] "Request Body" body=""
	I1002 20:14:44.005018   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:44.005312   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:44.505045   32280 type.go:168] "Request Body" body=""
	I1002 20:14:44.505109   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:44.505414   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:45.004946   32280 type.go:168] "Request Body" body=""
	I1002 20:14:45.005005   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:45.005307   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:45.504859   32280 type.go:168] "Request Body" body=""
	I1002 20:14:45.504931   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:45.505245   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:45.505309   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:46.005851   32280 type.go:168] "Request Body" body=""
	I1002 20:14:46.005934   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:46.006245   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:46.505842   32280 type.go:168] "Request Body" body=""
	I1002 20:14:46.505929   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:46.506226   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:47.005902   32280 type.go:168] "Request Body" body=""
	I1002 20:14:47.005962   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:47.006270   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:47.504848   32280 type.go:168] "Request Body" body=""
	I1002 20:14:47.504912   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:47.505208   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:48.005819   32280 type.go:168] "Request Body" body=""
	I1002 20:14:48.005910   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:48.006200   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:48.006262   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:48.504839   32280 type.go:168] "Request Body" body=""
	I1002 20:14:48.504925   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:48.505259   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:49.004816   32280 type.go:168] "Request Body" body=""
	I1002 20:14:49.004911   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:49.005214   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:49.504930   32280 type.go:168] "Request Body" body=""
	I1002 20:14:49.505022   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:49.505322   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:50.004888   32280 type.go:168] "Request Body" body=""
	I1002 20:14:50.004963   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:50.005258   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:50.505167   32280 type.go:168] "Request Body" body=""
	I1002 20:14:50.505271   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:50.505603   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:50.505700   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:51.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:14:51.005941   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:51.006228   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:51.505859   32280 type.go:168] "Request Body" body=""
	I1002 20:14:51.505973   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:51.506301   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:52.004831   32280 type.go:168] "Request Body" body=""
	I1002 20:14:52.004912   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:52.005216   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:52.504814   32280 type.go:168] "Request Body" body=""
	I1002 20:14:52.504898   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:52.505216   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:53.005826   32280 type.go:168] "Request Body" body=""
	I1002 20:14:53.005886   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:53.006180   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:53.006232   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:53.505812   32280 type.go:168] "Request Body" body=""
	I1002 20:14:53.505888   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:53.506201   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:54.005808   32280 type.go:168] "Request Body" body=""
	I1002 20:14:54.005871   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:54.006166   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:54.504871   32280 type.go:168] "Request Body" body=""
	I1002 20:14:54.504938   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:54.505247   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:55.004807   32280 type.go:168] "Request Body" body=""
	I1002 20:14:55.004892   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:55.005219   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:55.505889   32280 type.go:168] "Request Body" body=""
	I1002 20:14:55.505973   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:55.506277   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:55.506339   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:56.004856   32280 type.go:168] "Request Body" body=""
	I1002 20:14:56.004932   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:56.005222   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:56.504887   32280 type.go:168] "Request Body" body=""
	I1002 20:14:56.504963   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:56.505264   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:57.004822   32280 type.go:168] "Request Body" body=""
	I1002 20:14:57.004940   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:57.005238   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:57.505875   32280 type.go:168] "Request Body" body=""
	I1002 20:14:57.505940   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:57.506273   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:58.005858   32280 type.go:168] "Request Body" body=""
	I1002 20:14:58.005932   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:58.006233   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:58.006297   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:58.504813   32280 type.go:168] "Request Body" body=""
	I1002 20:14:58.504910   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:58.505221   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:59.005853   32280 type.go:168] "Request Body" body=""
	I1002 20:14:59.005916   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:59.006215   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:59.505003   32280 type.go:168] "Request Body" body=""
	I1002 20:14:59.505079   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:59.505422   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:00.005901   32280 type.go:168] "Request Body" body=""
	I1002 20:15:00.005989   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:00.006298   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:00.006348   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:00.505148   32280 type.go:168] "Request Body" body=""
	I1002 20:15:00.505241   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:00.505605   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:01.005169   32280 type.go:168] "Request Body" body=""
	I1002 20:15:01.005247   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:01.005557   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:01.505254   32280 type.go:168] "Request Body" body=""
	I1002 20:15:01.505323   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:01.505705   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:02.004981   32280 type.go:168] "Request Body" body=""
	I1002 20:15:02.005068   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:02.005397   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:02.505008   32280 type.go:168] "Request Body" body=""
	I1002 20:15:02.505082   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:02.505394   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:02.505450   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:03.004993   32280 type.go:168] "Request Body" body=""
	I1002 20:15:03.005058   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:03.005394   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:03.504950   32280 type.go:168] "Request Body" body=""
	I1002 20:15:03.505020   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:03.505326   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:04.004918   32280 type.go:168] "Request Body" body=""
	I1002 20:15:04.004994   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:04.005296   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:04.504973   32280 type.go:168] "Request Body" body=""
	I1002 20:15:04.505039   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:04.505347   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:05.004936   32280 type.go:168] "Request Body" body=""
	I1002 20:15:05.005003   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:05.005311   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:05.005362   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
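
Every request in this loop carries the same two headers: an Accept line telling the apiserver the client prefers the Kubernetes protobuf encoding with JSON as the fallback, and a minikube User-Agent. A hand-built equivalent with net/http, reusing the URL and header values from the log (a sketch only):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest(http.MethodGet,
			"https://192.168.49.2:8441/api/v1/nodes/functional-753218", nil)
		if err != nil {
			panic(err)
		}
		// Prefer protobuf on the wire, fall back to JSON.
		req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
		req.Header.Set("User-Agent", "minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format")

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Println("request failed:", err) // expected while the apiserver is down
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status)
	}
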
	I1002 20:15:05.504869   32280 type.go:168] "Request Body" body=""
	I1002 20:15:05.504948   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:05.505259   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:06.004882   32280 type.go:168] "Request Body" body=""
	I1002 20:15:06.004968   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:06.005279   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:06.504947   32280 type.go:168] "Request Body" body=""
	I1002 20:15:06.505019   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:06.505334   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:07.004932   32280 type.go:168] "Request Body" body=""
	I1002 20:15:07.005002   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:07.005310   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:07.504930   32280 type.go:168] "Request Body" body=""
	I1002 20:15:07.505006   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:07.505377   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:07.505433   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:08.004961   32280 type.go:168] "Request Body" body=""
	I1002 20:15:08.005028   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:08.005338   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:08.504957   32280 type.go:168] "Request Body" body=""
	I1002 20:15:08.505046   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:08.505358   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:09.004947   32280 type.go:168] "Request Body" body=""
	I1002 20:15:09.005025   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:09.005346   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:09.505183   32280 type.go:168] "Request Body" body=""
	I1002 20:15:09.505247   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:09.505575   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:09.505626   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:10.005155   32280 type.go:168] "Request Body" body=""
	I1002 20:15:10.005219   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:10.005531   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:10.505400   32280 type.go:168] "Request Body" body=""
	I1002 20:15:10.505469   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:10.505813   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:11.005490   32280 type.go:168] "Request Body" body=""
	I1002 20:15:11.005553   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:11.005896   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:11.505548   32280 type.go:168] "Request Body" body=""
	I1002 20:15:11.505612   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:11.505961   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:11.506027   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:12.005617   32280 type.go:168] "Request Body" body=""
	I1002 20:15:12.005691   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:12.005983   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:12.505700   32280 type.go:168] "Request Body" body=""
	I1002 20:15:12.505770   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:12.506098   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:13.005755   32280 type.go:168] "Request Body" body=""
	I1002 20:15:13.005828   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:13.006168   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:13.505836   32280 type.go:168] "Request Body" body=""
	I1002 20:15:13.505920   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:13.506241   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:13.506290   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:14.005887   32280 type.go:168] "Request Body" body=""
	I1002 20:15:14.005962   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:14.006270   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:14.505064   32280 type.go:168] "Request Body" body=""
	I1002 20:15:14.505129   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:14.505450   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:15.004995   32280 type.go:168] "Request Body" body=""
	I1002 20:15:15.005063   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:15.005377   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:15.504923   32280 type.go:168] "Request Body" body=""
	I1002 20:15:15.504986   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:15.505294   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:16.004941   32280 type.go:168] "Request Body" body=""
	I1002 20:15:16.005008   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:16.005312   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:16.005376   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:16.504960   32280 type.go:168] "Request Body" body=""
	I1002 20:15:16.505033   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:16.505386   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:17.005033   32280 type.go:168] "Request Body" body=""
	I1002 20:15:17.005095   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:17.005406   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:17.504971   32280 type.go:168] "Request Body" body=""
	I1002 20:15:17.505037   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:17.505351   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:18.005815   32280 type.go:168] "Request Body" body=""
	I1002 20:15:18.005879   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:18.006192   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:18.006247   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
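
Each failure in this stretch is a refused TCP dial: nothing is listening on 192.168.49.2:8441 while the control plane is down. In Go that case can be told apart from other transport errors with errors.Is, as in this Linux-specific sketch (syscall.ECONNREFUSED; minikube's own error handling may differ):

	package main

	import (
		"errors"
		"fmt"
		"net/http"
		"syscall"
	)

	// classify distinguishes a refused dial from any other transport error.
	func classify(err error) string {
		if errors.Is(err, syscall.ECONNREFUSED) {
			return "apiserver not listening yet (connection refused)"
		}
		return "other transport error"
	}

	func main() {
		_, err := http.Get("https://192.168.49.2:8441/api/v1/nodes/functional-753218")
		if err != nil {
			fmt.Println(classify(err))
		}
	}
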
	I1002 20:15:18.505849   32280 type.go:168] "Request Body" body=""
	I1002 20:15:18.505919   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:18.506247   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:19.004886   32280 type.go:168] "Request Body" body=""
	I1002 20:15:19.004961   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:19.005261   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:19.505076   32280 type.go:168] "Request Body" body=""
	I1002 20:15:19.505144   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:19.505477   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:20.005007   32280 type.go:168] "Request Body" body=""
	I1002 20:15:20.005071   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:20.005381   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:20.505183   32280 type.go:168] "Request Body" body=""
	I1002 20:15:20.505251   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:20.505582   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:20.505635   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:21.004962   32280 type.go:168] "Request Body" body=""
	I1002 20:15:21.005029   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:21.005332   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:21.504914   32280 type.go:168] "Request Body" body=""
	I1002 20:15:21.504977   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:21.505316   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:22.004889   32280 type.go:168] "Request Body" body=""
	I1002 20:15:22.004987   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:22.005283   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:22.504874   32280 type.go:168] "Request Body" body=""
	I1002 20:15:22.504937   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:22.505267   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:23.004838   32280 type.go:168] "Request Body" body=""
	I1002 20:15:23.004900   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:23.005227   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:23.005283   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:23.505836   32280 type.go:168] "Request Body" body=""
	I1002 20:15:23.505908   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:23.506231   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:24.005841   32280 type.go:168] "Request Body" body=""
	I1002 20:15:24.005903   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:24.006198   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:24.504997   32280 type.go:168] "Request Body" body=""
	I1002 20:15:24.505059   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:24.505375   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:25.004926   32280 type.go:168] "Request Body" body=""
	I1002 20:15:25.005003   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:25.005304   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:25.005362   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:25.504905   32280 type.go:168] "Request Body" body=""
	I1002 20:15:25.504971   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:25.505275   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:26.004817   32280 type.go:168] "Request Body" body=""
	I1002 20:15:26.004887   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:26.005210   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:26.505879   32280 type.go:168] "Request Body" body=""
	I1002 20:15:26.506038   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:26.506430   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:27.005027   32280 type.go:168] "Request Body" body=""
	I1002 20:15:27.005114   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:27.005415   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:27.005474   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:27.505002   32280 type.go:168] "Request Body" body=""
	I1002 20:15:27.505082   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:27.505420   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:28.004986   32280 type.go:168] "Request Body" body=""
	I1002 20:15:28.005053   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:28.005352   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:28.504930   32280 type.go:168] "Request Body" body=""
	I1002 20:15:28.505000   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:28.505364   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:29.004934   32280 type.go:168] "Request Body" body=""
	I1002 20:15:29.005011   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:29.005308   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:29.505191   32280 type.go:168] "Request Body" body=""
	I1002 20:15:29.505283   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:29.505637   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:29.505741   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:30.005210   32280 type.go:168] "Request Body" body=""
	I1002 20:15:30.005271   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:30.005562   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:30.505505   32280 type.go:168] "Request Body" body=""
	I1002 20:15:30.505575   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:30.505938   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:31.005554   32280 type.go:168] "Request Body" body=""
	I1002 20:15:31.005640   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:31.005967   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:31.505585   32280 type.go:168] "Request Body" body=""
	I1002 20:15:31.505683   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:31.506006   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:31.506056   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:32.005634   32280 type.go:168] "Request Body" body=""
	I1002 20:15:32.005710   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:32.006002   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:32.505666   32280 type.go:168] "Request Body" body=""
	I1002 20:15:32.505734   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:32.506032   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:33.005694   32280 type.go:168] "Request Body" body=""
	I1002 20:15:33.005768   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:33.006118   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:33.505738   32280 type.go:168] "Request Body" body=""
	I1002 20:15:33.505801   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:33.506120   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:33.506192   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:34.005749   32280 type.go:168] "Request Body" body=""
	I1002 20:15:34.005835   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:34.006190   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:34.504997   32280 type.go:168] "Request Body" body=""
	I1002 20:15:34.505063   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:34.505359   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:35.004979   32280 type.go:168] "Request Body" body=""
	I1002 20:15:35.005040   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:35.005361   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:35.504958   32280 type.go:168] "Request Body" body=""
	I1002 20:15:35.505028   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:35.505325   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:36.004893   32280 type.go:168] "Request Body" body=""
	I1002 20:15:36.004960   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:36.005275   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:36.005327   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:36.504861   32280 type.go:168] "Request Body" body=""
	I1002 20:15:36.504942   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:36.505241   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:37.004818   32280 type.go:168] "Request Body" body=""
	I1002 20:15:37.004883   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:37.005203   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:37.504876   32280 type.go:168] "Request Body" body=""
	I1002 20:15:37.504951   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:37.505259   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:38.004888   32280 type.go:168] "Request Body" body=""
	I1002 20:15:38.004979   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:38.005286   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:38.504969   32280 type.go:168] "Request Body" body=""
	I1002 20:15:38.505049   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:38.505376   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:38.505429   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:39.004950   32280 type.go:168] "Request Body" body=""
	I1002 20:15:39.005018   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:39.005330   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:39.505071   32280 type.go:168] "Request Body" body=""
	I1002 20:15:39.505137   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:39.505431   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:40.005007   32280 type.go:168] "Request Body" body=""
	I1002 20:15:40.005090   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:40.005385   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:40.505098   32280 type.go:168] "Request Body" body=""
	I1002 20:15:40.505197   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:40.505502   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:40.505558   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:41.005068   32280 type.go:168] "Request Body" body=""
	I1002 20:15:41.005132   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:41.005435   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:41.505003   32280 type.go:168] "Request Body" body=""
	I1002 20:15:41.505067   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:41.505459   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:42.005029   32280 type.go:168] "Request Body" body=""
	I1002 20:15:42.005101   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:42.005410   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:42.505061   32280 type.go:168] "Request Body" body=""
	I1002 20:15:42.505128   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:42.505440   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:43.005053   32280 type.go:168] "Request Body" body=""
	I1002 20:15:43.005164   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:43.005534   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:43.005626   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:43.505101   32280 type.go:168] "Request Body" body=""
	I1002 20:15:43.505195   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:43.505496   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:44.005084   32280 type.go:168] "Request Body" body=""
	I1002 20:15:44.005178   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:44.005496   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:44.505460   32280 type.go:168] "Request Body" body=""
	I1002 20:15:44.505524   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:44.505855   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:45.005560   32280 type.go:168] "Request Body" body=""
	I1002 20:15:45.005631   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:45.005984   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:45.006035   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:45.505602   32280 type.go:168] "Request Body" body=""
	I1002 20:15:45.505705   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:45.506005   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:46.005627   32280 type.go:168] "Request Body" body=""
	I1002 20:15:46.005713   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:46.006024   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:46.505689   32280 type.go:168] "Request Body" body=""
	I1002 20:15:46.505755   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:46.506045   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:47.005272   32280 type.go:168] "Request Body" body=""
	I1002 20:15:47.005340   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:47.005666   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:47.505213   32280 type.go:168] "Request Body" body=""
	I1002 20:15:47.505283   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:47.505638   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:47.505724   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:48.004992   32280 type.go:168] "Request Body" body=""
	I1002 20:15:48.005062   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:48.005371   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:48.504960   32280 type.go:168] "Request Body" body=""
	I1002 20:15:48.505025   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:48.505343   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:49.004918   32280 type.go:168] "Request Body" body=""
	I1002 20:15:49.004982   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:49.005325   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:49.505056   32280 type.go:168] "Request Body" body=""
	I1002 20:15:49.505122   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:49.505424   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:50.004984   32280 type.go:168] "Request Body" body=""
	I1002 20:15:50.005048   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:50.005347   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:50.005399   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:50.505099   32280 type.go:168] "Request Body" body=""
	I1002 20:15:50.505173   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:50.505478   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:51.005059   32280 type.go:168] "Request Body" body=""
	I1002 20:15:51.005133   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:51.005463   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:51.505016   32280 type.go:168] "Request Body" body=""
	I1002 20:15:51.505084   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:51.505393   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:52.005067   32280 type.go:168] "Request Body" body=""
	I1002 20:15:52.005155   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:52.005476   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:52.005533   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:52.505040   32280 type.go:168] "Request Body" body=""
	I1002 20:15:52.505105   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:52.505403   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:53.004962   32280 type.go:168] "Request Body" body=""
	I1002 20:15:53.005025   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:53.005338   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:53.504924   32280 type.go:168] "Request Body" body=""
	I1002 20:15:53.505008   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:53.505327   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:54.004900   32280 type.go:168] "Request Body" body=""
	I1002 20:15:54.004970   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:54.005314   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:54.505066   32280 type.go:168] "Request Body" body=""
	I1002 20:15:54.505137   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:54.505438   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:54.505496   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:15:55.005002   32280 type.go:168] "Request Body" body=""
	I1002 20:15:55.005067   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:55.005372   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:55.504901   32280 type.go:168] "Request Body" body=""
	I1002 20:15:55.504971   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:55.505282   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:56.004915   32280 type.go:168] "Request Body" body=""
	I1002 20:15:56.004985   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:56.005314   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:56.504880   32280 type.go:168] "Request Body" body=""
	I1002 20:15:56.504955   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:56.505267   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:15:57.004835   32280 type.go:168] "Request Body" body=""
	I1002 20:15:57.004920   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:15:57.005242   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:15:57.005291   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	[... ~120 further identical polling iterations omitted (20:15:57.5 through 20:16:58.5): the same GET https://192.168.49.2:8441/api/v1/nodes/functional-753218 request repeats every ~0.5s, each logged as "Response" status="" headers="" milliseconds=0, with a node_ready.go:55 warning roughly every 2s: error getting node "functional-753218" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1002 20:16:59.005867   32280 type.go:168] "Request Body" body=""
	I1002 20:16:59.005934   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:59.006232   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:59.006289   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:59.505004   32280 type.go:168] "Request Body" body=""
	I1002 20:16:59.505077   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:59.505422   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:00.004977   32280 type.go:168] "Request Body" body=""
	I1002 20:17:00.005041   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:00.005341   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:00.505177   32280 type.go:168] "Request Body" body=""
	I1002 20:17:00.505244   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:00.505577   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:01.005106   32280 type.go:168] "Request Body" body=""
	I1002 20:17:01.005177   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:01.005505   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:01.505109   32280 type.go:168] "Request Body" body=""
	I1002 20:17:01.505191   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:01.505585   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:01.505680   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:02.005132   32280 type.go:168] "Request Body" body=""
	I1002 20:17:02.005199   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:02.005526   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:02.505094   32280 type.go:168] "Request Body" body=""
	I1002 20:17:02.505168   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:02.505564   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:03.005060   32280 type.go:168] "Request Body" body=""
	I1002 20:17:03.005126   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:03.005440   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:03.504982   32280 type.go:168] "Request Body" body=""
	I1002 20:17:03.505055   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:03.505393   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:04.004973   32280 type.go:168] "Request Body" body=""
	I1002 20:17:04.005038   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:04.005353   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:04.005404   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:04.505123   32280 type.go:168] "Request Body" body=""
	I1002 20:17:04.505251   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:04.505555   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:05.005089   32280 type.go:168] "Request Body" body=""
	I1002 20:17:05.005151   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:05.005451   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:05.505031   32280 type.go:168] "Request Body" body=""
	I1002 20:17:05.505104   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:05.505423   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:06.004950   32280 type.go:168] "Request Body" body=""
	I1002 20:17:06.005039   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:06.005333   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:06.504958   32280 type.go:168] "Request Body" body=""
	I1002 20:17:06.505029   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:06.505369   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:06.505429   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:07.004923   32280 type.go:168] "Request Body" body=""
	I1002 20:17:07.004993   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:07.005301   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:07.504862   32280 type.go:168] "Request Body" body=""
	I1002 20:17:07.504930   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:07.505255   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:08.004807   32280 type.go:168] "Request Body" body=""
	I1002 20:17:08.004883   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:08.005186   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:08.505831   32280 type.go:168] "Request Body" body=""
	I1002 20:17:08.505899   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:08.506230   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:08.506299   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:09.005828   32280 type.go:168] "Request Body" body=""
	I1002 20:17:09.005891   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:09.006223   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:09.505024   32280 type.go:168] "Request Body" body=""
	I1002 20:17:09.505092   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:09.505459   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:10.005013   32280 type.go:168] "Request Body" body=""
	I1002 20:17:10.005077   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:10.005388   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:10.505140   32280 type.go:168] "Request Body" body=""
	I1002 20:17:10.505212   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:10.505598   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:11.005128   32280 type.go:168] "Request Body" body=""
	I1002 20:17:11.005195   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:11.005534   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:11.005597   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:11.505120   32280 type.go:168] "Request Body" body=""
	I1002 20:17:11.505189   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:11.505524   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:12.005153   32280 type.go:168] "Request Body" body=""
	I1002 20:17:12.005225   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:12.005562   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:12.505110   32280 type.go:168] "Request Body" body=""
	I1002 20:17:12.505174   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:12.505532   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:13.005106   32280 type.go:168] "Request Body" body=""
	I1002 20:17:13.005174   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:13.005476   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:13.505007   32280 type.go:168] "Request Body" body=""
	I1002 20:17:13.505068   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:13.505435   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:13.505488   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:14.005005   32280 type.go:168] "Request Body" body=""
	I1002 20:17:14.005066   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:14.005383   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:14.505172   32280 type.go:168] "Request Body" body=""
	I1002 20:17:14.505244   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:14.505573   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:15.005134   32280 type.go:168] "Request Body" body=""
	I1002 20:17:15.005205   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:15.005503   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:15.505066   32280 type.go:168] "Request Body" body=""
	I1002 20:17:15.505141   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:15.505446   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:15.505511   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:16.005011   32280 type.go:168] "Request Body" body=""
	I1002 20:17:16.005080   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:16.005386   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:16.504935   32280 type.go:168] "Request Body" body=""
	I1002 20:17:16.505015   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:16.505327   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:17.004855   32280 type.go:168] "Request Body" body=""
	I1002 20:17:17.004919   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:17.005223   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:17.505899   32280 type.go:168] "Request Body" body=""
	I1002 20:17:17.505967   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:17.506302   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:17.506357   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:18.004876   32280 type.go:168] "Request Body" body=""
	I1002 20:17:18.004943   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:18.005245   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:18.504839   32280 type.go:168] "Request Body" body=""
	I1002 20:17:18.504915   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:18.505232   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:19.005865   32280 type.go:168] "Request Body" body=""
	I1002 20:17:19.005947   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:19.006269   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:19.505022   32280 type.go:168] "Request Body" body=""
	I1002 20:17:19.505094   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:19.505407   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:20.004991   32280 type.go:168] "Request Body" body=""
	I1002 20:17:20.005069   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:20.005405   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:20.005466   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:20.505228   32280 type.go:168] "Request Body" body=""
	I1002 20:17:20.505297   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:20.505591   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:21.005210   32280 type.go:168] "Request Body" body=""
	I1002 20:17:21.005276   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:21.005584   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:21.505147   32280 type.go:168] "Request Body" body=""
	I1002 20:17:21.505208   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:21.505526   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:22.005059   32280 type.go:168] "Request Body" body=""
	I1002 20:17:22.005124   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:22.005426   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:22.504985   32280 type.go:168] "Request Body" body=""
	I1002 20:17:22.505049   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:22.505347   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:22.505407   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:23.004930   32280 type.go:168] "Request Body" body=""
	I1002 20:17:23.005006   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:23.005328   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:23.504881   32280 type.go:168] "Request Body" body=""
	I1002 20:17:23.504945   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:23.505245   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:24.005892   32280 type.go:168] "Request Body" body=""
	I1002 20:17:24.005969   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:24.006315   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:24.505033   32280 type.go:168] "Request Body" body=""
	I1002 20:17:24.505105   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:24.505414   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:24.505472   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:25.004948   32280 type.go:168] "Request Body" body=""
	I1002 20:17:25.005011   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:25.005380   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:25.504947   32280 type.go:168] "Request Body" body=""
	I1002 20:17:25.505016   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:25.505308   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:26.004843   32280 type.go:168] "Request Body" body=""
	I1002 20:17:26.004909   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:26.005238   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:26.504813   32280 type.go:168] "Request Body" body=""
	I1002 20:17:26.504873   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:26.505173   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:27.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:17:27.005931   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:27.006247   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:27.006305   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:27.505850   32280 type.go:168] "Request Body" body=""
	I1002 20:17:27.505914   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:27.506242   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:28.004933   32280 type.go:168] "Request Body" body=""
	I1002 20:17:28.005009   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:28.005342   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:28.504866   32280 type.go:168] "Request Body" body=""
	I1002 20:17:28.505005   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:28.505322   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:29.004896   32280 type.go:168] "Request Body" body=""
	I1002 20:17:29.004966   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:29.005261   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:29.505004   32280 type.go:168] "Request Body" body=""
	I1002 20:17:29.505069   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:29.505365   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:29.505422   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:30.004927   32280 type.go:168] "Request Body" body=""
	I1002 20:17:30.004988   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:30.005290   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:30.504959   32280 type.go:168] "Request Body" body=""
	I1002 20:17:30.505027   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:30.505340   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:31.004932   32280 type.go:168] "Request Body" body=""
	I1002 20:17:31.005002   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:31.005338   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:31.504887   32280 type.go:168] "Request Body" body=""
	I1002 20:17:31.504956   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:31.505260   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:32.004876   32280 type.go:168] "Request Body" body=""
	I1002 20:17:32.004950   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:32.005251   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:32.005312   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:32.505895   32280 type.go:168] "Request Body" body=""
	I1002 20:17:32.505961   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:32.506274   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:33.004876   32280 type.go:168] "Request Body" body=""
	I1002 20:17:33.004958   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:33.005280   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:33.504821   32280 type.go:168] "Request Body" body=""
	I1002 20:17:33.504892   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:33.505232   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:34.005931   32280 type.go:168] "Request Body" body=""
	I1002 20:17:34.006061   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:34.006376   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:34.006427   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:34.505046   32280 type.go:168] "Request Body" body=""
	I1002 20:17:34.505112   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:34.505397   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:35.004981   32280 type.go:168] "Request Body" body=""
	I1002 20:17:35.005045   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:35.005370   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:35.504929   32280 type.go:168] "Request Body" body=""
	I1002 20:17:35.505015   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:35.505318   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:36.004980   32280 type.go:168] "Request Body" body=""
	I1002 20:17:36.005058   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:36.005394   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:36.504997   32280 type.go:168] "Request Body" body=""
	I1002 20:17:36.505060   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:36.505342   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:36.505398   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:37.004903   32280 type.go:168] "Request Body" body=""
	I1002 20:17:37.004978   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:37.005282   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:37.504878   32280 type.go:168] "Request Body" body=""
	I1002 20:17:37.504942   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:37.505231   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:38.005855   32280 type.go:168] "Request Body" body=""
	I1002 20:17:38.005918   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:38.006208   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:38.505835   32280 type.go:168] "Request Body" body=""
	I1002 20:17:38.505904   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:38.506229   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:38.506296   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:39.004853   32280 type.go:168] "Request Body" body=""
	I1002 20:17:39.004944   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:39.005263   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:39.505135   32280 type.go:168] "Request Body" body=""
	I1002 20:17:39.505217   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:39.505615   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:40.005193   32280 type.go:168] "Request Body" body=""
	I1002 20:17:40.005282   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:40.005581   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:40.505135   32280 type.go:168] "Request Body" body=""
	I1002 20:17:40.505207   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:40.505537   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:41.005103   32280 type.go:168] "Request Body" body=""
	I1002 20:17:41.005165   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:41.005505   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:41.005563   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:41.505063   32280 type.go:168] "Request Body" body=""
	I1002 20:17:41.505150   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:41.505490   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:42.005054   32280 type.go:168] "Request Body" body=""
	I1002 20:17:42.005160   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:42.005471   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:42.505019   32280 type.go:168] "Request Body" body=""
	I1002 20:17:42.505084   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:42.505402   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:43.004932   32280 type.go:168] "Request Body" body=""
	I1002 20:17:43.005022   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:43.005350   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:43.504923   32280 type.go:168] "Request Body" body=""
	I1002 20:17:43.505007   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:43.505339   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:43.505393   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:44.004924   32280 type.go:168] "Request Body" body=""
	I1002 20:17:44.005006   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:44.005323   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:44.505096   32280 type.go:168] "Request Body" body=""
	I1002 20:17:44.505171   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:44.505478   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:45.005011   32280 type.go:168] "Request Body" body=""
	I1002 20:17:45.005090   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:45.005399   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:45.504952   32280 type.go:168] "Request Body" body=""
	I1002 20:17:45.505012   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:45.505310   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:46.004864   32280 type.go:168] "Request Body" body=""
	I1002 20:17:46.004951   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:46.005294   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:46.005355   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:46.504873   32280 type.go:168] "Request Body" body=""
	I1002 20:17:46.504940   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:46.505244   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:47.005848   32280 type.go:168] "Request Body" body=""
	I1002 20:17:47.005930   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:47.006252   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:47.504816   32280 type.go:168] "Request Body" body=""
	I1002 20:17:47.504905   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:47.505215   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:48.005846   32280 type.go:168] "Request Body" body=""
	I1002 20:17:48.005933   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:48.006242   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:48.006300   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:48.505916   32280 type.go:168] "Request Body" body=""
	I1002 20:17:48.505980   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:48.506270   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:49.004828   32280 type.go:168] "Request Body" body=""
	I1002 20:17:49.004910   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:49.005240   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:49.504935   32280 type.go:168] "Request Body" body=""
	I1002 20:17:49.505024   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:49.505373   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:50.004932   32280 type.go:168] "Request Body" body=""
	I1002 20:17:50.005011   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:50.005340   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:50.505078   32280 type.go:168] "Request Body" body=""
	I1002 20:17:50.505147   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:50.505479   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:50.505532   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:51.005024   32280 type.go:168] "Request Body" body=""
	I1002 20:17:51.005103   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:51.005420   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:51.504998   32280 type.go:168] "Request Body" body=""
	I1002 20:17:51.505075   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:51.505410   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:52.005000   32280 type.go:168] "Request Body" body=""
	I1002 20:17:52.005081   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:52.005428   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:52.505012   32280 type.go:168] "Request Body" body=""
	I1002 20:17:52.505100   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:52.505419   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:53.005015   32280 type.go:168] "Request Body" body=""
	I1002 20:17:53.005100   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:53.005438   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:53.005495   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:53.504988   32280 type.go:168] "Request Body" body=""
	I1002 20:17:53.505059   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:53.505385   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:54.004971   32280 type.go:168] "Request Body" body=""
	I1002 20:17:54.005048   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:54.005388   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:54.505199   32280 type.go:168] "Request Body" body=""
	I1002 20:17:54.505286   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:54.505624   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:55.005199   32280 type.go:168] "Request Body" body=""
	I1002 20:17:55.005287   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:55.005639   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:55.005734   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:55.505238   32280 type.go:168] "Request Body" body=""
	I1002 20:17:55.505303   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:55.505621   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:56.005174   32280 type.go:168] "Request Body" body=""
	I1002 20:17:56.005258   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:56.005612   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:56.505166   32280 type.go:168] "Request Body" body=""
	I1002 20:17:56.505231   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:56.505523   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:57.005076   32280 type.go:168] "Request Body" body=""
	I1002 20:17:57.005156   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:57.005631   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:57.505096   32280 type.go:168] "Request Body" body=""
	I1002 20:17:57.505167   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:57.505488   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:57.505554   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:58.005160   32280 type.go:168] "Request Body" body=""
	I1002 20:17:58.005227   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:58.005552   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:58.505084   32280 type.go:168] "Request Body" body=""
	I1002 20:17:58.505166   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:58.505512   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:59.005073   32280 type.go:168] "Request Body" body=""
	I1002 20:17:59.005138   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:59.005430   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:59.505390   32280 type.go:168] "Request Body" body=""
	I1002 20:17:59.505459   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:59.505823   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:59.505890   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:00.005468   32280 type.go:168] "Request Body" body=""
	I1002 20:18:00.005540   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:00.005877   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:00.505768   32280 type.go:168] "Request Body" body=""
	I1002 20:18:00.505843   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:00.506177   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:01.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:18:01.005945   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:01.006334   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:01.504923   32280 type.go:168] "Request Body" body=""
	I1002 20:18:01.504996   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:01.505321   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:02.004953   32280 type.go:168] "Request Body" body=""
	I1002 20:18:02.005017   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:02.005334   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:02.005385   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:02.504884   32280 type.go:168] "Request Body" body=""
	I1002 20:18:02.504963   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:02.505259   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:03.004934   32280 type.go:168] "Request Body" body=""
	I1002 20:18:03.005005   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:03.005356   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:03.504932   32280 type.go:168] "Request Body" body=""
	I1002 20:18:03.505015   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:03.505307   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:04.004878   32280 type.go:168] "Request Body" body=""
	I1002 20:18:04.004960   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:04.005291   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:04.505045   32280 type.go:168] "Request Body" body=""
	I1002 20:18:04.505131   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:04.505465   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:04.505520   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:05.005008   32280 type.go:168] "Request Body" body=""
	I1002 20:18:05.005088   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:05.005422   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:05.504977   32280 type.go:168] "Request Body" body=""
	I1002 20:18:05.505046   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:05.505355   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:06.004890   32280 type.go:168] "Request Body" body=""
	I1002 20:18:06.004955   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:06.005271   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:06.505878   32280 type.go:168] "Request Body" body=""
	I1002 20:18:06.505942   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:06.506244   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:06.506297   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:07.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:18:07.005943   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:07.006253   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:07.504887   32280 type.go:168] "Request Body" body=""
	I1002 20:18:07.504964   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:07.505251   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:08.004916   32280 type.go:168] "Request Body" body=""
	I1002 20:18:08.004981   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:08.005306   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:08.504856   32280 type.go:168] "Request Body" body=""
	I1002 20:18:08.504941   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:08.505239   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:09.005880   32280 type.go:168] "Request Body" body=""
	I1002 20:18:09.005952   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:09.006285   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:09.006339   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:09.505080   32280 type.go:168] "Request Body" body=""
	I1002 20:18:09.505146   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:09.505447   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:10.005082   32280 type.go:168] "Request Body" body=""
	I1002 20:18:10.005147   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:10.005473   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:10.505242   32280 type.go:168] "Request Body" body=""
	I1002 20:18:10.505307   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:10.505606   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:11.005169   32280 type.go:168] "Request Body" body=""
	I1002 20:18:11.005243   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:11.005570   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:11.505121   32280 type.go:168] "Request Body" body=""
	I1002 20:18:11.505186   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:11.505487   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:11.505538   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:12.005071   32280 type.go:168] "Request Body" body=""
	I1002 20:18:12.005141   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:12.005461   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:12.505817   32280 type.go:168] "Request Body" body=""
	I1002 20:18:12.505883   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:12.506177   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:13.005815   32280 type.go:168] "Request Body" body=""
	I1002 20:18:13.005887   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:13.006211   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:13.505873   32280 type.go:168] "Request Body" body=""
	I1002 20:18:13.505942   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:13.506236   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:13.506287   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:14.004813   32280 type.go:168] "Request Body" body=""
	I1002 20:18:14.004883   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:14.005208   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:14.505838   32280 type.go:168] "Request Body" body=""
	I1002 20:18:14.505912   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:14.506225   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:15.005871   32280 type.go:168] "Request Body" body=""
	I1002 20:18:15.005949   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:15.006278   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:15.504830   32280 type.go:168] "Request Body" body=""
	I1002 20:18:15.504900   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:15.505190   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:16.004845   32280 type.go:168] "Request Body" body=""
	I1002 20:18:16.004935   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:16.005267   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:16.005321   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:16.504844   32280 type.go:168] "Request Body" body=""
	I1002 20:18:16.504915   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:16.505208   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:17.004848   32280 type.go:168] "Request Body" body=""
	I1002 20:18:17.005199   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:17.005523   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:17.505033   32280 type.go:168] "Request Body" body=""
	I1002 20:18:17.505107   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:17.505434   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:18.004982   32280 type.go:168] "Request Body" body=""
	I1002 20:18:18.005069   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:18.005443   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:18.005498   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:18.505161   32280 type.go:168] "Request Body" body=""
	I1002 20:18:18.505228   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:18.505530   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:19.005238   32280 type.go:168] "Request Body" body=""
	I1002 20:18:19.005302   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:19.005626   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:19.505401   32280 type.go:168] "Request Body" body=""
	I1002 20:18:19.505466   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:19.505798   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:20.005591   32280 type.go:168] "Request Body" body=""
	I1002 20:18:20.005673   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:20.006000   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:20.006051   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:20.505823   32280 type.go:168] "Request Body" body=""
	I1002 20:18:20.505886   32280 node_ready.go:38] duration metric: took 6m0.001160736s for node "functional-753218" to be "Ready" ...
	I1002 20:18:20.508034   32280 out.go:203] 
	W1002 20:18:20.509328   32280 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:18:20.509341   32280 out.go:285] * 
	W1002 20:18:20.511008   32280 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:18:20.512144   32280 out.go:203] 
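	
	(For context: the repeated Request/Response pairs above are minikube's node-readiness poll, reissuing the same GET roughly every 500ms until the 6m0s deadline expires. A minimal sketch of the same pattern in Go, assuming a plain net/http client against the apiserver URL from the log — minikube's real loop uses client-go and decodes the Node object, so everything here is illustrative:)
	
		package main
	
		import (
			"crypto/tls"
			"fmt"
			"net/http"
			"time"
		)
	
		func main() {
			// Endpoint taken from the log above; InsecureSkipVerify stands in
			// for the client certificates a real kubeconfig would supply.
			url := "https://192.168.49.2:8441/api/v1/nodes/functional-753218"
			client := &http.Client{
				Timeout:   2 * time.Second,
				Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			}
			deadline := time.Now().Add(6 * time.Minute)
			for time.Now().Before(deadline) {
				resp, err := client.Get(url)
				if err != nil {
					// This is the "connection refused" branch seen in the log.
					fmt.Println("will retry:", err)
					time.Sleep(500 * time.Millisecond)
					continue
				}
				resp.Body.Close()
				fmt.Println("apiserver answered with status", resp.Status)
				return
			}
			fmt.Println("timed out waiting for node to be ready")
		}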
	
	
	==> CRI-O <==
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.321103858Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.321491573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.322911304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.323405869Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.337779996Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=95f1ede9-941f-4144-88f8-a404282372bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.338388539Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=7925e8f1-8190-46db-ba21-210ddaa8dfad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.339261167Z" level=info msg="createCtr: deleting container ID 01a007256b26260bbda1a485ac64ac3c89901e23abb5a27a0f834cf970bbb39d from idIndex" id=95f1ede9-941f-4144-88f8-a404282372bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.339292438Z" level=info msg="createCtr: removing container 01a007256b26260bbda1a485ac64ac3c89901e23abb5a27a0f834cf970bbb39d" id=95f1ede9-941f-4144-88f8-a404282372bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.339320859Z" level=info msg="createCtr: deleting container 01a007256b26260bbda1a485ac64ac3c89901e23abb5a27a0f834cf970bbb39d from storage" id=95f1ede9-941f-4144-88f8-a404282372bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.339813064Z" level=info msg="createCtr: deleting container ID 3de785e23a0e2a9b688fd47d95f0e222abadb184c86c16208d5674e3ecc87423 from idIndex" id=7925e8f1-8190-46db-ba21-210ddaa8dfad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.339845489Z" level=info msg="createCtr: removing container 3de785e23a0e2a9b688fd47d95f0e222abadb184c86c16208d5674e3ecc87423" id=7925e8f1-8190-46db-ba21-210ddaa8dfad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.339873272Z" level=info msg="createCtr: deleting container 3de785e23a0e2a9b688fd47d95f0e222abadb184c86c16208d5674e3ecc87423 from storage" id=7925e8f1-8190-46db-ba21-210ddaa8dfad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.34258012Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-753218_kube-system_8f4d4ea1035e2535a9c472062bfdd7f7_0" id=95f1ede9-941f-4144-88f8-a404282372bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:17 functional-753218 crio[2940]: time="2025-10-02T20:18:17.342938703Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-753218_kube-system_b25a71e49a335bbe853872de1b1e3093_0" id=7925e8f1-8190-46db-ba21-210ddaa8dfad name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.313913897Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=d8ae3c90-d4ca-4ad6-873d-1994584e161b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.314705476Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=7cd2779d-b010-4b8c-9573-780ae7bedbb6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.315549863Z" level=info msg="Creating container: kube-system/etcd-functional-753218/etcd" id=f7613c0c-6e95-41c4-a5fd-ef3dce797596 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.315792146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.319001536Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.319368187Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.33516141Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f7613c0c-6e95-41c4-a5fd-ef3dce797596 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.336600751Z" level=info msg="createCtr: deleting container ID f5ea9f185f346f3d3e3da1f5f6186ca0c1fd2f6c58678ae2aa18ebfc909aba4b from idIndex" id=f7613c0c-6e95-41c4-a5fd-ef3dce797596 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.336636077Z" level=info msg="createCtr: removing container f5ea9f185f346f3d3e3da1f5f6186ca0c1fd2f6c58678ae2aa18ebfc909aba4b" id=f7613c0c-6e95-41c4-a5fd-ef3dce797596 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.336695824Z" level=info msg="createCtr: deleting container f5ea9f185f346f3d3e3da1f5f6186ca0c1fd2f6c58678ae2aa18ebfc909aba4b from storage" id=f7613c0c-6e95-41c4-a5fd-ef3dce797596 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:18 functional-753218 crio[2940]: time="2025-10-02T20:18:18.33870937Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-753218_kube-system_91f2b96cb3e3f380ded17f30e8d873bd_0" id=f7613c0c-6e95-41c4-a5fd-ef3dce797596 name=/runtime.v1.RuntimeService/CreateContainer
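	
	(The CRI-O errors above all share one root failure: every CreateContainer call dies with "cannot open sd-bus: No such file or directory", which points at a runtime configured for the systemd cgroup manager with no systemd bus socket reachable. A hedged way to check for the sockets involved — the paths below are the conventional systemd/D-Bus locations on a typical host, not something this log confirms:)
	
		package main
	
		import (
			"fmt"
			"os"
		)
	
		func main() {
			// Conventional socket paths used by sd-bus clients; both are
			// assumptions about a typical systemd host, not taken from this log.
			candidates := []string{
				"/run/systemd/private",        // systemd's private manager bus
				"/run/dbus/system_bus_socket", // the system D-Bus socket
			}
			for _, p := range candidates {
				if fi, err := os.Stat(p); err != nil {
					fmt.Printf("%s: missing (%v)\n", p, err)
				} else {
					fmt.Printf("%s: present, mode %v\n", p, fi.Mode())
				}
			}
		}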
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:18:24.155777    4493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:18:24.156302    4493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:18:24.157842    4493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:18:24.158247    4493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:18:24.159749    4493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
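	
	(All of the kubectl errors above are the same symptom: nothing is listening on port 8441, so the TCP handshake is refused immediately — as opposed to hanging, which would point at a firewall or routing problem. A quick probe that makes the distinction explicit, with the address taken from the log:)
	
		package main
	
		import (
			"errors"
			"fmt"
			"net"
			"os"
			"syscall"
			"time"
		)
	
		func main() {
			conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("something is listening on 8441")
				return
			}
			if errors.Is(err, syscall.ECONNREFUSED) {
				fmt.Println("connection refused: host reachable but no listener (apiserver down)")
			} else if os.IsTimeout(err) {
				fmt.Println("timeout: packets likely dropped (firewall/routing)")
			} else {
				fmt.Println("dial failed:", err)
			}
		}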
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:18:24 up  1:00,  0 user,  load average: 0.36, 0.12, 0.08
	Linux functional-753218 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:18:17 functional-753218 kubelet[1799]:         container kube-apiserver start failed in pod kube-apiserver-functional-753218_kube-system(8f4d4ea1035e2535a9c472062bfdd7f7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:17 functional-753218 kubelet[1799]:  > logger="UnhandledError"
	Oct 02 20:18:17 functional-753218 kubelet[1799]: E1002 20:18:17.342989    1799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753218" podUID="8f4d4ea1035e2535a9c472062bfdd7f7"
	Oct 02 20:18:17 functional-753218 kubelet[1799]: E1002 20:18:17.343140    1799 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:18:17 functional-753218 kubelet[1799]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:17 functional-753218 kubelet[1799]:  > podSandboxID="de1cc60186f989d4e0a8994c95a3f2e5173970c97e595ad7db2d469e1551df14"
	Oct 02 20:18:17 functional-753218 kubelet[1799]: E1002 20:18:17.343209    1799 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:18:17 functional-753218 kubelet[1799]:         container kube-scheduler start failed in pod kube-scheduler-functional-753218_kube-system(b25a71e49a335bbe853872de1b1e3093): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:17 functional-753218 kubelet[1799]:  > logger="UnhandledError"
	Oct 02 20:18:17 functional-753218 kubelet[1799]: E1002 20:18:17.344344    1799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753218" podUID="b25a71e49a335bbe853872de1b1e3093"
	Oct 02 20:18:18 functional-753218 kubelet[1799]: E1002 20:18:18.019553    1799 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-753218.186ac570b511d2a5\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753218.186ac570b511d2a5  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753218,UID:functional-753218,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753218 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753218,},FirstTimestamp:2025-10-02 20:08:12.306453157 +0000 UTC m=+0.389048074,LastTimestamp:2025-10-02 20:08:12.30766191 +0000 UTC m=+0.390256814,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753218,}"
	Oct 02 20:18:18 functional-753218 kubelet[1799]: E1002 20:18:18.313518    1799 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:18:18 functional-753218 kubelet[1799]: E1002 20:18:18.338994    1799 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:18:18 functional-753218 kubelet[1799]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:18 functional-753218 kubelet[1799]:  > podSandboxID="65675f5fefd97e29be9e11728def45d5a2c472bac18f3ca682b57fda50e5abf7"
	Oct 02 20:18:18 functional-753218 kubelet[1799]: E1002 20:18:18.339099    1799 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:18:18 functional-753218 kubelet[1799]:         container etcd start failed in pod etcd-functional-753218_kube-system(91f2b96cb3e3f380ded17f30e8d873bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:18 functional-753218 kubelet[1799]:  > logger="UnhandledError"
	Oct 02 20:18:18 functional-753218 kubelet[1799]: E1002 20:18:18.339137    1799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753218" podUID="91f2b96cb3e3f380ded17f30e8d873bd"
	Oct 02 20:18:19 functional-753218 kubelet[1799]: E1002 20:18:19.371428    1799 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 20:18:19 functional-753218 kubelet[1799]: E1002 20:18:19.624437    1799 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 02 20:18:19 functional-753218 kubelet[1799]: E1002 20:18:19.986926    1799 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753218?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:18:20 functional-753218 kubelet[1799]: I1002 20:18:20.185014    1799 kubelet_node_status.go:75] "Attempting to register node" node="functional-753218"
	Oct 02 20:18:20 functional-753218 kubelet[1799]: E1002 20:18:20.185332    1799 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753218"
	Oct 02 20:18:22 functional-753218 kubelet[1799]: E1002 20:18:22.352492    1799 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-753218\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218: exit status 2 (290.632339ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753218" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (1.97s)
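
(A side note on the status command used throughout these post-mortems: --format={{.APIServer}} is a Go text/template applied to minikube's status struct, which is why the output collapses to the single word "Stopped" above. A self-contained sketch of the mechanism — the Status struct here is an illustrative stand-in, not minikube's actual type:)

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for the struct minikube renders; only the
	// field name matches what the --format flag selects.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		// With the apiserver down, minikube would populate this as "Stopped".
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"})
	}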

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (2.01s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 kubectl -- --context functional-753218 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 kubectl -- --context functional-753218 get pods: exit status 1 (91.28383ms)

                                                
                                                
** stderr ** 
	E1002 20:18:30.245220   38121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:18:30.245546   38121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:18:30.246964   38121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:18:30.247208   38121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:18:30.248489   38121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-amd64 -p functional-753218 kubectl -- --context functional-753218 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753218
helpers_test.go:243: (dbg) docker inspect functional-753218:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	        "Created": "2025-10-02T20:04:01.746609873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27425,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:04:01.779241985Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hosts",
	        "LogPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2-json.log",
	        "Name": "/functional-753218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	                "LowerDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753218",
	                "Source": "/var/lib/docker/volumes/functional-753218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753218",
	                "name.minikube.sigs.k8s.io": "functional-753218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "004081b2f3377fd186346a9b01eb1a4bc9a3def8255492ba6d60a3d97d3ae02f",
	            "SandboxKey": "/var/run/docker/netns/004081b2f337",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:da:57:41:87:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5a9808f557430f8bb8f634f8b99193d9e4883c7bab84d388ec04ce3ac0291ef6",
	                    "EndpointID": "2b07b33f476d99b70eed072dbaf735ba0ad3a85f48f17fd63921a2bfbfd2db45",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753218",
	                        "13014a14c3fc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
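For reference, the port mappings in the inspect dump above can be queried directly. A minimal shell sketch using the same Go template that cli_runner logs further down (container name taken from this report; illustrative only, not part of the harness):

    # Print the 127.0.0.1 host port Docker mapped to each exposed port
    # of the functional-753218 node container.
    for port in 22 2376 5000 8441 32443; do
      printf '%s/tcp -> ' "$port"
      docker container inspect \
        -f "{{(index (index .NetworkSettings.Ports \"$port/tcp\") 0).HostPort}}" \
        functional-753218
    done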
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218: exit status 2 (281.074492ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 logs -n 25
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-547008 --log_dir /tmp/nospam-547008 pause                                                              │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                            │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                            │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                            │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                               │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                               │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                               │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ delete  │ -p nospam-547008                                                                                              │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ start   │ -p functional-753218 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │                     │
	│ start   │ -p functional-753218 --alsologtostderr -v=8                                                                   │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │                     │
	│ cache   │ functional-753218 cache add registry.k8s.io/pause:3.1                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ functional-753218 cache add registry.k8s.io/pause:3.3                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ functional-753218 cache add registry.k8s.io/pause:latest                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ functional-753218 cache add minikube-local-cache-test:functional-753218                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ functional-753218 cache delete minikube-local-cache-test:functional-753218                                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ ssh     │ functional-753218 ssh sudo crictl images                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ ssh     │ functional-753218 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ ssh     │ functional-753218 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ cache   │ functional-753218 cache reload                                                                                │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ ssh     │ functional-753218 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ kubectl │ functional-753218 kubectl -- --context functional-753218 get pods                                             │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
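The last audit row above is the command under test in this post-mortem. Replaying it by hand should reproduce the failure; the binary path and flag order are reconstructed from the ARGS column, so treat this as an approximation:

    out/minikube-linux-amd64 -p functional-753218 kubectl -- --context functional-753218 get pods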
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:12:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:12:14.161053   32280 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:12:14.161314   32280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:12:14.161324   32280 out.go:374] Setting ErrFile to fd 2...
	I1002 20:12:14.161329   32280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:12:14.161525   32280 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:12:14.161965   32280 out.go:368] Setting JSON to false
	I1002 20:12:14.162918   32280 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3283,"bootTime":1759432651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:12:14.163001   32280 start.go:140] virtualization: kvm guest
	I1002 20:12:14.165258   32280 out.go:179] * [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:12:14.166596   32280 notify.go:221] Checking for updates...
	I1002 20:12:14.166661   32280 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:12:14.168151   32280 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:12:14.169781   32280 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:14.170964   32280 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:12:14.172159   32280 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:12:14.173393   32280 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:12:14.175005   32280 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:12:14.175089   32280 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:12:14.198042   32280 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:12:14.198110   32280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:12:14.249812   32280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:12:14.240278836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:12:14.249943   32280 docker.go:319] overlay module found
	I1002 20:12:14.251744   32280 out.go:179] * Using the docker driver based on existing profile
	I1002 20:12:14.252771   32280 start.go:306] selected driver: docker
	I1002 20:12:14.252788   32280 start.go:936] validating driver "docker" against &{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:12:14.252894   32280 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:12:14.253012   32280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:12:14.302717   32280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:12:14.29341416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:12:14.303277   32280 cni.go:84] Creating CNI manager for ""
	I1002 20:12:14.303332   32280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:12:14.303374   32280 start.go:350] cluster config:
	{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:12:14.305248   32280 out.go:179] * Starting "functional-753218" primary control-plane node in "functional-753218" cluster
	I1002 20:12:14.306703   32280 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:12:14.308110   32280 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:12:14.309231   32280 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:12:14.309270   32280 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:12:14.309292   32280 cache.go:59] Caching tarball of preloaded images
	I1002 20:12:14.309321   32280 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:12:14.309392   32280 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:12:14.309404   32280 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:12:14.309506   32280 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/config.json ...
	I1002 20:12:14.328595   32280 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:12:14.328612   32280 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:12:14.328641   32280 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:12:14.328685   32280 start.go:361] acquireMachinesLock for functional-753218: {Name:mk742badf6f1dbafca8397e398143e758831ae3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:12:14.328749   32280 start.go:365] duration metric: took 40.346µs to acquireMachinesLock for "functional-753218"
	I1002 20:12:14.328768   32280 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:12:14.328773   32280 fix.go:55] fixHost starting: 
	I1002 20:12:14.328978   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:14.345315   32280 fix.go:113] recreateIfNeeded on functional-753218: state=Running err=<nil>
	W1002 20:12:14.345339   32280 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:12:14.347103   32280 out.go:252] * Updating the running docker "functional-753218" container ...
	I1002 20:12:14.347127   32280 machine.go:93] provisionDockerMachine start ...
	I1002 20:12:14.347175   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.364778   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:14.365056   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:14.365071   32280 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:12:14.506481   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
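The hostname check above runs over Docker's forwarded SSH port. The same session can be opened by hand; port 32778, the key path, and the docker user all appear verbatim in the sshutil lines later in this log:

    # Manual equivalent of the native libmachine SSH client above.
    ssh -o StrictHostKeyChecking=no -p 32778 \
      -i /home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa \
      docker@127.0.0.1 hostname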
	I1002 20:12:14.506514   32280 ubuntu.go:182] provisioning hostname "functional-753218"
	I1002 20:12:14.506576   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.523646   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:14.523886   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:14.523904   32280 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753218 && echo "functional-753218" | sudo tee /etc/hostname
	I1002 20:12:14.674327   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:12:14.674412   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.691957   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:14.692191   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:14.692210   32280 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753218/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:12:14.834109   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:12:14.834144   32280 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:12:14.834205   32280 ubuntu.go:190] setting up certificates
	I1002 20:12:14.834219   32280 provision.go:84] configureAuth start
	I1002 20:12:14.834287   32280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:12:14.852021   32280 provision.go:143] copyHostCerts
	I1002 20:12:14.852056   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:12:14.852091   32280 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:12:14.852111   32280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:12:14.852184   32280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:12:14.852289   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:12:14.852315   32280 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:12:14.852322   32280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:12:14.852367   32280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:12:14.852431   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:12:14.852454   32280 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:12:14.852460   32280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:12:14.852497   32280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:12:14.852565   32280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.functional-753218 san=[127.0.0.1 192.168.49.2 functional-753218 localhost minikube]
	I1002 20:12:14.908205   32280 provision.go:177] copyRemoteCerts
	I1002 20:12:14.908265   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:12:14.908316   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.925225   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.025356   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:12:15.025415   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:12:15.042012   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:12:15.042068   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:12:15.059080   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:12:15.059140   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:12:15.075501   32280 provision.go:87] duration metric: took 241.264617ms to configureAuth
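The server cert generated above should carry the SAN list requested in the provision line (127.0.0.1, 192.168.49.2, functional-753218, localhost, minikube). A quick check against the same path, as a sketch:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'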
	I1002 20:12:15.075530   32280 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:12:15.075723   32280 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:12:15.075835   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.092499   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:15.092718   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:15.092740   32280 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:12:15.350871   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:12:15.350899   32280 machine.go:96] duration metric: took 1.003764785s to provisionDockerMachine
	I1002 20:12:15.350913   32280 start.go:294] postStartSetup for "functional-753218" (driver="docker")
	I1002 20:12:15.350926   32280 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:12:15.350976   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:12:15.351010   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.368192   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.468976   32280 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:12:15.472512   32280 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1002 20:12:15.472527   32280 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1002 20:12:15.472540   32280 command_runner.go:130] > VERSION_ID="12"
	I1002 20:12:15.472545   32280 command_runner.go:130] > VERSION="12 (bookworm)"
	I1002 20:12:15.472553   32280 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1002 20:12:15.472556   32280 command_runner.go:130] > ID=debian
	I1002 20:12:15.472560   32280 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1002 20:12:15.472565   32280 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1002 20:12:15.472572   32280 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1002 20:12:15.472618   32280 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:12:15.472635   32280 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:12:15.472666   32280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:12:15.472731   32280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:12:15.472806   32280 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:12:15.472815   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:12:15.472889   32280 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts -> hosts in /etc/test/nested/copy/12851
	I1002 20:12:15.472896   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts -> /etc/test/nested/copy/12851/hosts
	I1002 20:12:15.472925   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12851
	I1002 20:12:15.480384   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:12:15.496865   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts --> /etc/test/nested/copy/12851/hosts (40 bytes)
	I1002 20:12:15.513635   32280 start.go:297] duration metric: took 162.708522ms for postStartSetup
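Both local assets synced above land inside the node container; a one-line spot check (container name from this report, destination paths from the scp lines):

    docker exec functional-753218 ls -l /etc/ssl/certs/128512.pem /etc/test/nested/copy/12851/hosts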
	I1002 20:12:15.513745   32280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:12:15.513794   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.530644   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.628445   32280 command_runner.go:130] > 39%
	I1002 20:12:15.628745   32280 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:12:15.633076   32280 command_runner.go:130] > 179G
	I1002 20:12:15.633306   32280 fix.go:57] duration metric: took 1.304525715s for fixHost
	I1002 20:12:15.633325   32280 start.go:84] releasing machines lock for "functional-753218", held for 1.30456494s
	I1002 20:12:15.633398   32280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:12:15.650579   32280 ssh_runner.go:195] Run: cat /version.json
	I1002 20:12:15.650618   32280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:12:15.650631   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.650688   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.668938   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.669107   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.765770   32280 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1002 20:12:15.817112   32280 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 20:12:15.819166   32280 ssh_runner.go:195] Run: systemctl --version
	I1002 20:12:15.825335   32280 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1002 20:12:15.825364   32280 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 20:12:15.825559   32280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:12:15.861701   32280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 20:12:15.866192   32280 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 20:12:15.866262   32280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:12:15.866323   32280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:12:15.874084   32280 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:12:15.874106   32280 start.go:496] detecting cgroup driver to use...
	I1002 20:12:15.874141   32280 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:12:15.874206   32280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:12:15.887803   32280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:12:15.899530   32280 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:12:15.899588   32280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:12:15.913378   32280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:12:15.925494   32280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:12:16.013036   32280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:12:16.099049   32280 docker.go:234] disabling docker service ...
	I1002 20:12:16.099135   32280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:12:16.112698   32280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:12:16.124592   32280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:12:16.212924   32280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:12:16.298302   32280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:12:16.310529   32280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:12:16.324186   32280 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 20:12:16.324212   32280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:12:16.324248   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.332999   32280 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:12:16.333067   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.341758   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.350162   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.358406   32280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:12:16.365887   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.374465   32280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.382513   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.390861   32280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:12:16.397800   32280 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 20:12:16.397864   32280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:12:16.404831   32280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:12:16.487603   32280 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:12:19.404809   32280 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.917172928s)
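The sed edits before the restart all target the same CRI-O drop-in. The expected end state is reconstructed below from the commands themselves (this log never prints the file), with a grep to confirm it on the node:

    # Expected in /etc/crio/crio.conf.d/02-crio.conf after the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf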
	I1002 20:12:19.404840   32280 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:12:19.404889   32280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:12:19.408896   32280 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 20:12:19.408924   32280 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 20:12:19.408935   32280 command_runner.go:130] > Device: 0,59	Inode: 3843        Links: 1
	I1002 20:12:19.408947   32280 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:12:19.408956   32280 command_runner.go:130] > Access: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.408964   32280 command_runner.go:130] > Modify: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.408977   32280 command_runner.go:130] > Change: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.408989   32280 command_runner.go:130] >  Birth: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.409044   32280 start.go:564] Will wait 60s for crictl version
	I1002 20:12:19.409092   32280 ssh_runner.go:195] Run: which crictl
	I1002 20:12:19.412689   32280 command_runner.go:130] > /usr/local/bin/crictl
	I1002 20:12:19.412744   32280 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:12:19.436957   32280 command_runner.go:130] > Version:  0.1.0
	I1002 20:12:19.436979   32280 command_runner.go:130] > RuntimeName:  cri-o
	I1002 20:12:19.436984   32280 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1002 20:12:19.436989   32280 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 20:12:19.437005   32280 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:12:19.437072   32280 ssh_runner.go:195] Run: crio --version
	I1002 20:12:19.464211   32280 command_runner.go:130] > crio version 1.34.1
	I1002 20:12:19.464228   32280 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:12:19.464234   32280 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:12:19.464240   32280 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:12:19.464244   32280 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:12:19.464248   32280 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:12:19.464252   32280 command_runner.go:130] >    Compiler:       gc
	I1002 20:12:19.464257   32280 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:12:19.464261   32280 command_runner.go:130] >    Linkmode:       static
	I1002 20:12:19.464264   32280 command_runner.go:130] >    BuildTags:
	I1002 20:12:19.464267   32280 command_runner.go:130] >      static
	I1002 20:12:19.464275   32280 command_runner.go:130] >      netgo
	I1002 20:12:19.464279   32280 command_runner.go:130] >      osusergo
	I1002 20:12:19.464283   32280 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:12:19.464288   32280 command_runner.go:130] >      seccomp
	I1002 20:12:19.464291   32280 command_runner.go:130] >      apparmor
	I1002 20:12:19.464298   32280 command_runner.go:130] >      selinux
	I1002 20:12:19.464302   32280 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:12:19.464306   32280 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:12:19.464310   32280 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:12:19.464385   32280 ssh_runner.go:195] Run: crio --version
	I1002 20:12:19.491564   32280 command_runner.go:130] > crio version 1.34.1
	I1002 20:12:19.491590   32280 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:12:19.491596   32280 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:12:19.491601   32280 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:12:19.491605   32280 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:12:19.491609   32280 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:12:19.491612   32280 command_runner.go:130] >    Compiler:       gc
	I1002 20:12:19.491619   32280 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:12:19.491623   32280 command_runner.go:130] >    Linkmode:       static
	I1002 20:12:19.491627   32280 command_runner.go:130] >    BuildTags:
	I1002 20:12:19.491630   32280 command_runner.go:130] >      static
	I1002 20:12:19.491634   32280 command_runner.go:130] >      netgo
	I1002 20:12:19.491637   32280 command_runner.go:130] >      osusergo
	I1002 20:12:19.491641   32280 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:12:19.491665   32280 command_runner.go:130] >      seccomp
	I1002 20:12:19.491671   32280 command_runner.go:130] >      apparmor
	I1002 20:12:19.491681   32280 command_runner.go:130] >      selinux
	I1002 20:12:19.491687   32280 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:12:19.491700   32280 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:12:19.491719   32280 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:12:19.493718   32280 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:12:19.495253   32280 cli_runner.go:164] Run: docker network inspect functional-753218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:12:19.512253   32280 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:12:19.516262   32280 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1002 20:12:19.516341   32280 kubeadm.go:883] updating cluster {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:12:19.516485   32280 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:12:19.516543   32280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:12:19.546693   32280 command_runner.go:130] > {
	I1002 20:12:19.546715   32280 command_runner.go:130] >   "images":  [
	I1002 20:12:19.546721   32280 command_runner.go:130] >     {
	I1002 20:12:19.546728   32280 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:12:19.546732   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.546739   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:12:19.546745   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546774   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.546794   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:12:19.546808   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:12:19.546815   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546819   32280 command_runner.go:130] >       "size":  "109379124",
	I1002 20:12:19.546826   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.546835   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.546843   32280 command_runner.go:130] >     },
	I1002 20:12:19.546850   32280 command_runner.go:130] >     {
	I1002 20:12:19.546862   32280 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:12:19.546873   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.546881   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:12:19.546890   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546896   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.546909   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:12:19.546920   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:12:19.546937   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546947   32280 command_runner.go:130] >       "size":  "31470524",
	I1002 20:12:19.546954   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.546966   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.546972   32280 command_runner.go:130] >     },
	I1002 20:12:19.546979   32280 command_runner.go:130] >     {
	I1002 20:12:19.546989   32280 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:12:19.547010   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547022   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:12:19.547032   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547039   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547053   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:12:19.547065   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:12:19.547073   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547080   32280 command_runner.go:130] >       "size":  "76103547",
	I1002 20:12:19.547087   32280 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:12:19.547091   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547094   32280 command_runner.go:130] >     },
	I1002 20:12:19.547100   32280 command_runner.go:130] >     {
	I1002 20:12:19.547113   32280 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:12:19.547119   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547129   32280 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:12:19.547135   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547144   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547154   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:12:19.547167   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:12:19.547176   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547182   32280 command_runner.go:130] >       "size":  "195976448",
	I1002 20:12:19.547187   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547192   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547201   32280 command_runner.go:130] >       },
	I1002 20:12:19.547217   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547228   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547233   32280 command_runner.go:130] >     },
	I1002 20:12:19.547242   32280 command_runner.go:130] >     {
	I1002 20:12:19.547252   32280 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:12:19.547261   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547269   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:12:19.547276   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547281   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547301   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:12:19.547316   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:12:19.547321   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547331   32280 command_runner.go:130] >       "size":  "89046001",
	I1002 20:12:19.547337   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547346   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547352   32280 command_runner.go:130] >       },
	I1002 20:12:19.547361   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547368   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547376   32280 command_runner.go:130] >     },
	I1002 20:12:19.547380   32280 command_runner.go:130] >     {
	I1002 20:12:19.547390   32280 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:12:19.547396   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547407   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:12:19.547413   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547423   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547435   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:12:19.547451   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:12:19.547459   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547466   32280 command_runner.go:130] >       "size":  "76004181",
	I1002 20:12:19.547474   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547480   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547489   32280 command_runner.go:130] >       },
	I1002 20:12:19.547495   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547507   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547512   32280 command_runner.go:130] >     },
	I1002 20:12:19.547517   32280 command_runner.go:130] >     {
	I1002 20:12:19.547527   32280 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:12:19.547534   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547541   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:12:19.547546   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547552   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547561   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:12:19.547582   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:12:19.547592   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547599   32280 command_runner.go:130] >       "size":  "73138073",
	I1002 20:12:19.547606   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547615   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547624   32280 command_runner.go:130] >     },
	I1002 20:12:19.547629   32280 command_runner.go:130] >     {
	I1002 20:12:19.547641   32280 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:12:19.547658   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547667   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:12:19.547673   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547683   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547693   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:12:19.547720   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:12:19.547729   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547733   32280 command_runner.go:130] >       "size":  "53844823",
	I1002 20:12:19.547737   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547743   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547752   32280 command_runner.go:130] >       },
	I1002 20:12:19.547758   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547768   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547775   32280 command_runner.go:130] >     },
	I1002 20:12:19.547782   32280 command_runner.go:130] >     {
	I1002 20:12:19.547794   32280 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:12:19.547804   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547814   32280 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:12:19.547820   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547825   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547839   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:12:19.547853   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:12:19.547861   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547867   32280 command_runner.go:130] >       "size":  "742092",
	I1002 20:12:19.547876   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547887   32280 command_runner.go:130] >         "value":  "65535"
	I1002 20:12:19.547894   32280 command_runner.go:130] >       },
	I1002 20:12:19.547900   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547906   32280 command_runner.go:130] >       "pinned":  true
	I1002 20:12:19.547910   32280 command_runner.go:130] >     }
	I1002 20:12:19.547917   32280 command_runner.go:130] >   ]
	I1002 20:12:19.547924   32280 command_runner.go:130] > }
	I1002 20:12:19.548472   32280 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:12:19.548485   32280 crio.go:433] Images already preloaded, skipping extraction
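
The JSON above is the raw material for the preload decision just logged: minikube compares the listed tags and digests against the image set it expects for v1.34.1 on crio. A minimal sketch of decoding this output in Go; the struct tags follow the keys visible in the dump, and running it assumes crictl is installed and reachable via sudo:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the objects in crictl's "images" array above; note that
// crictl reports "size" as a string, not a number.
type image struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

type imageList struct {
	Images []image `json:"images"`
}

func main() {
	// The same command the log shows minikube running over SSH.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, img.Size)
	}
}
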
	I1002 20:12:19.548524   32280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:12:19.570809   32280 command_runner.go:130] > {
	I1002 20:12:19.570828   32280 command_runner.go:130] >   "images":  [
	I1002 20:12:19.570831   32280 command_runner.go:130] >     {
	I1002 20:12:19.570839   32280 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:12:19.570844   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.570849   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:12:19.570853   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570857   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.570864   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:12:19.570871   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:12:19.570877   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570882   32280 command_runner.go:130] >       "size":  "109379124",
	I1002 20:12:19.570889   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.570902   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.570908   32280 command_runner.go:130] >     },
	I1002 20:12:19.570914   32280 command_runner.go:130] >     {
	I1002 20:12:19.570922   32280 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:12:19.570928   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.570932   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:12:19.570938   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570941   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.570948   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:12:19.570958   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:12:19.570964   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570971   32280 command_runner.go:130] >       "size":  "31470524",
	I1002 20:12:19.570976   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.570985   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.570990   32280 command_runner.go:130] >     },
	I1002 20:12:19.570993   32280 command_runner.go:130] >     {
	I1002 20:12:19.571001   32280 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:12:19.571005   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571012   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:12:19.571016   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571021   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571028   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:12:19.571037   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:12:19.571043   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571047   32280 command_runner.go:130] >       "size":  "76103547",
	I1002 20:12:19.571050   32280 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:12:19.571056   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571059   32280 command_runner.go:130] >     },
	I1002 20:12:19.571065   32280 command_runner.go:130] >     {
	I1002 20:12:19.571071   32280 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:12:19.571077   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571081   32280 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:12:19.571087   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571091   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571099   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:12:19.571108   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:12:19.571113   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571117   32280 command_runner.go:130] >       "size":  "195976448",
	I1002 20:12:19.571122   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571126   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571132   32280 command_runner.go:130] >       },
	I1002 20:12:19.571139   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571145   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571152   32280 command_runner.go:130] >     },
	I1002 20:12:19.571157   32280 command_runner.go:130] >     {
	I1002 20:12:19.571163   32280 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:12:19.571169   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571173   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:12:19.571179   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571183   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571192   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:12:19.571201   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:12:19.571207   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571211   32280 command_runner.go:130] >       "size":  "89046001",
	I1002 20:12:19.571216   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571220   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571226   32280 command_runner.go:130] >       },
	I1002 20:12:19.571231   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571234   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571237   32280 command_runner.go:130] >     },
	I1002 20:12:19.571242   32280 command_runner.go:130] >     {
	I1002 20:12:19.571249   32280 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:12:19.571255   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571260   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:12:19.571265   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571269   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571276   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:12:19.571286   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:12:19.571292   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571295   32280 command_runner.go:130] >       "size":  "76004181",
	I1002 20:12:19.571301   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571305   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571310   32280 command_runner.go:130] >       },
	I1002 20:12:19.571314   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571318   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571323   32280 command_runner.go:130] >     },
	I1002 20:12:19.571327   32280 command_runner.go:130] >     {
	I1002 20:12:19.571335   32280 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:12:19.571339   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571349   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:12:19.571355   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571359   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571367   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:12:19.571376   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:12:19.571382   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571386   32280 command_runner.go:130] >       "size":  "73138073",
	I1002 20:12:19.571393   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571397   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571402   32280 command_runner.go:130] >     },
	I1002 20:12:19.571405   32280 command_runner.go:130] >     {
	I1002 20:12:19.571410   32280 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:12:19.571414   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571418   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:12:19.571422   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571425   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571431   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:12:19.571446   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:12:19.571455   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571461   32280 command_runner.go:130] >       "size":  "53844823",
	I1002 20:12:19.571469   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571474   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571482   32280 command_runner.go:130] >       },
	I1002 20:12:19.571488   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571495   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571498   32280 command_runner.go:130] >     },
	I1002 20:12:19.571504   32280 command_runner.go:130] >     {
	I1002 20:12:19.571510   32280 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:12:19.571516   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571520   32280 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:12:19.571526   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571530   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571542   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:12:19.571552   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:12:19.571556   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571562   32280 command_runner.go:130] >       "size":  "742092",
	I1002 20:12:19.571565   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571571   32280 command_runner.go:130] >         "value":  "65535"
	I1002 20:12:19.571575   32280 command_runner.go:130] >       },
	I1002 20:12:19.571581   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571585   32280 command_runner.go:130] >       "pinned":  true
	I1002 20:12:19.571590   32280 command_runner.go:130] >     }
	I1002 20:12:19.571593   32280 command_runner.go:130] >   ]
	I1002 20:12:19.571598   32280 command_runner.go:130] > }
	I1002 20:12:19.572597   32280 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:12:19.572614   32280 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:12:19.572621   32280 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:12:19.572734   32280 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
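
The unit text above is produced by filling a template with per-node values (note --hostname-override and --node-ip). A minimal sketch of rendering such a unit with Go's text/template; the template body and variable names here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit reproduces the shape of the unit printed above; only a
// few of the real flags are shown.
const kubeletUnit = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime": "crio",
		"Version": "v1.34.1",
		"Node":    "functional-753218",
		"IP":      "192.168.49.2",
	})
}

The doubled ExecStart= is deliberate systemd syntax: an empty ExecStart clears any value inherited from the base unit before the override sets its own.
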
	I1002 20:12:19.572796   32280 ssh_runner.go:195] Run: crio config
	I1002 20:12:19.612615   32280 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 20:12:19.612638   32280 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 20:12:19.612664   32280 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 20:12:19.612669   32280 command_runner.go:130] > #
	I1002 20:12:19.612689   32280 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 20:12:19.612698   32280 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 20:12:19.612709   32280 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 20:12:19.612721   32280 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 20:12:19.612728   32280 command_runner.go:130] > # reload'.
	I1002 20:12:19.612738   32280 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 20:12:19.612748   32280 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 20:12:19.612758   32280 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 20:12:19.612768   32280 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 20:12:19.612773   32280 command_runner.go:130] > [crio]
	I1002 20:12:19.612785   32280 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 20:12:19.612796   32280 command_runner.go:130] > # containers images, in this directory.
	I1002 20:12:19.612808   32280 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 20:12:19.612821   32280 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 20:12:19.612828   32280 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1002 20:12:19.612841   32280 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory rather than under Root.
	I1002 20:12:19.612855   32280 command_runner.go:130] > # imagestore = ""
	I1002 20:12:19.612864   32280 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 20:12:19.612878   32280 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 20:12:19.612885   32280 command_runner.go:130] > # storage_driver = "overlay"
	I1002 20:12:19.612895   32280 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 20:12:19.612905   32280 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 20:12:19.612914   32280 command_runner.go:130] > # storage_option = [
	I1002 20:12:19.612917   32280 command_runner.go:130] > # ]
	I1002 20:12:19.612923   32280 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 20:12:19.612931   32280 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 20:12:19.612941   32280 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 20:12:19.612950   32280 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 20:12:19.612959   32280 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 20:12:19.612970   32280 command_runner.go:130] > # always happen on a node reboot
	I1002 20:12:19.612977   32280 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 20:12:19.612994   32280 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 20:12:19.613004   32280 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 20:12:19.613009   32280 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 20:12:19.613016   32280 command_runner.go:130] > # version_file_persist = ""
	I1002 20:12:19.613025   32280 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 20:12:19.613033   32280 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 20:12:19.613041   32280 command_runner.go:130] > # internal_wipe = true
	I1002 20:12:19.613054   32280 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1002 20:12:19.613066   32280 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1002 20:12:19.613075   32280 command_runner.go:130] > # internal_repair = true
	I1002 20:12:19.613083   32280 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 20:12:19.613095   32280 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 20:12:19.613113   32280 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 20:12:19.613120   32280 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 20:12:19.613129   32280 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 20:12:19.613134   32280 command_runner.go:130] > [crio.api]
	I1002 20:12:19.613142   32280 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 20:12:19.613150   32280 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 20:12:19.613162   32280 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 20:12:19.613173   32280 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 20:12:19.613185   32280 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 20:12:19.613197   32280 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 20:12:19.613204   32280 command_runner.go:130] > # stream_port = "0"
	I1002 20:12:19.613213   32280 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 20:12:19.613222   32280 command_runner.go:130] > # stream_enable_tls = false
	I1002 20:12:19.613231   32280 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 20:12:19.613238   32280 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 20:12:19.613248   32280 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 20:12:19.613260   32280 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1002 20:12:19.613266   32280 command_runner.go:130] > # stream_tls_cert = ""
	I1002 20:12:19.613274   32280 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 20:12:19.613292   32280 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1002 20:12:19.613301   32280 command_runner.go:130] > # stream_tls_key = ""
	I1002 20:12:19.613309   32280 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 20:12:19.613323   32280 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 20:12:19.613331   32280 command_runner.go:130] > # automatically pick up the changes.
	I1002 20:12:19.613340   32280 command_runner.go:130] > # stream_tls_ca = ""
	I1002 20:12:19.613394   32280 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:12:19.613408   32280 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 20:12:19.613420   32280 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:12:19.613430   32280 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
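
Both defaults above are 80 MiB expressed in bytes, as the comments state; a one-line sanity check in Go:

package main

import "fmt"

func main() {
	// CRI-O's documented gRPC message-size default: 80 * 1024 * 1024.
	const defaultGRPCMsgSize = 80 * 1024 * 1024
	fmt.Println(defaultGRPCMsgSize) // 83886080, matching the values above
}
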
	I1002 20:12:19.613440   32280 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 20:12:19.613452   32280 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 20:12:19.613458   32280 command_runner.go:130] > [crio.runtime]
	I1002 20:12:19.613469   32280 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 20:12:19.613481   32280 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 20:12:19.613487   32280 command_runner.go:130] > # "nofile=1024:2048"
	I1002 20:12:19.613500   32280 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 20:12:19.613508   32280 command_runner.go:130] > # default_ulimits = [
	I1002 20:12:19.613514   32280 command_runner.go:130] > # ]
	I1002 20:12:19.613526   32280 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 20:12:19.613532   32280 command_runner.go:130] > # no_pivot = false
	I1002 20:12:19.613543   32280 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 20:12:19.613554   32280 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 20:12:19.613564   32280 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 20:12:19.613573   32280 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 20:12:19.613584   32280 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 20:12:19.613594   32280 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:12:19.613603   32280 command_runner.go:130] > # conmon = ""
	I1002 20:12:19.613611   32280 command_runner.go:130] > # Cgroup setting for conmon
	I1002 20:12:19.613625   32280 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 20:12:19.613632   32280 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 20:12:19.613642   32280 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 20:12:19.613664   32280 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 20:12:19.613682   32280 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:12:19.613692   32280 command_runner.go:130] > # conmon_env = [
	I1002 20:12:19.613698   32280 command_runner.go:130] > # ]
	I1002 20:12:19.613710   32280 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 20:12:19.613720   32280 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 20:12:19.613729   32280 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 20:12:19.613739   32280 command_runner.go:130] > # default_env = [
	I1002 20:12:19.613746   32280 command_runner.go:130] > # ]
	I1002 20:12:19.613758   32280 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 20:12:19.613769   32280 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1002 20:12:19.613778   32280 command_runner.go:130] > # selinux = false
	I1002 20:12:19.613788   32280 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 20:12:19.613803   32280 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1002 20:12:19.613814   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.613822   32280 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:12:19.613835   32280 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1002 20:12:19.613846   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.613852   32280 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1002 20:12:19.613865   32280 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 20:12:19.613878   32280 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 20:12:19.613890   32280 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 20:12:19.613899   32280 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1002 20:12:19.613908   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.613917   32280 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 20:12:19.613926   32280 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 20:12:19.613937   32280 command_runner.go:130] > # the cgroup blockio controller.
	I1002 20:12:19.613944   32280 command_runner.go:130] > # blockio_config_file = ""
	I1002 20:12:19.613958   32280 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1002 20:12:19.613965   32280 command_runner.go:130] > # blockio parameters.
	I1002 20:12:19.613974   32280 command_runner.go:130] > # blockio_reload = false
	I1002 20:12:19.613983   32280 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 20:12:19.613994   32280 command_runner.go:130] > # irqbalance daemon.
	I1002 20:12:19.614002   32280 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 20:12:19.614013   32280 command_runner.go:130] > # irqbalance_config_restore_file allows one to set a cpu mask CRI-O should
	I1002 20:12:19.614023   32280 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1002 20:12:19.614037   32280 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1002 20:12:19.614048   32280 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1002 20:12:19.614061   32280 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 20:12:19.614068   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.614077   32280 command_runner.go:130] > # rdt_config_file = ""
	I1002 20:12:19.614085   32280 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 20:12:19.614095   32280 command_runner.go:130] > # cgroup_manager = "systemd"
	I1002 20:12:19.614104   32280 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 20:12:19.614113   32280 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 20:12:19.614127   32280 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 20:12:19.614139   32280 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 20:12:19.614147   32280 command_runner.go:130] > # will be added.
	I1002 20:12:19.614155   32280 command_runner.go:130] > # default_capabilities = [
	I1002 20:12:19.614163   32280 command_runner.go:130] > # 	"CHOWN",
	I1002 20:12:19.614170   32280 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 20:12:19.614177   32280 command_runner.go:130] > # 	"FSETID",
	I1002 20:12:19.614182   32280 command_runner.go:130] > # 	"FOWNER",
	I1002 20:12:19.614187   32280 command_runner.go:130] > # 	"SETGID",
	I1002 20:12:19.614210   32280 command_runner.go:130] > # 	"SETUID",
	I1002 20:12:19.614214   32280 command_runner.go:130] > # 	"SETPCAP",
	I1002 20:12:19.614219   32280 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 20:12:19.614223   32280 command_runner.go:130] > # 	"KILL",
	I1002 20:12:19.614227   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614236   32280 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 20:12:19.614243   32280 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 20:12:19.614248   32280 command_runner.go:130] > # add_inheritable_capabilities = false
	I1002 20:12:19.614256   32280 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 20:12:19.614265   32280 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:12:19.614271   32280 command_runner.go:130] > default_sysctls = [
	I1002 20:12:19.614279   32280 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1002 20:12:19.614284   32280 command_runner.go:130] > ]
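
Setting net.ipv4.ip_unprivileged_port_start=0 lets a non-root process in the container bind ports below 1024 without CAP_NET_BIND_SERVICE. A minimal probe of that behavior, assuming it runs inside a container where this sysctl applies:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Binding :80 as an unprivileged user succeeds only where the
	// sysctl above (or CAP_NET_BIND_SERVICE) permits low ports.
	ln, err := net.Listen("tcp", ":80")
	if err != nil {
		fmt.Println("low-port bind refused:", err)
		return
	}
	defer ln.Close()
	fmt.Println("bound", ln.Addr())
}
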
	I1002 20:12:19.614291   32280 command_runner.go:130] > # List of devices on the host that a
	I1002 20:12:19.614299   32280 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 20:12:19.614308   32280 command_runner.go:130] > # allowed_devices = [
	I1002 20:12:19.614313   32280 command_runner.go:130] > # 	"/dev/fuse",
	I1002 20:12:19.614321   32280 command_runner.go:130] > # 	"/dev/net/tun",
	I1002 20:12:19.614327   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614335   32280 command_runner.go:130] > # List of additional devices, specified as
	I1002 20:12:19.614349   32280 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 20:12:19.614359   32280 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 20:12:19.614368   32280 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:12:19.614376   32280 command_runner.go:130] > # additional_devices = [
	I1002 20:12:19.614381   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614388   32280 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 20:12:19.614394   32280 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 20:12:19.614398   32280 command_runner.go:130] > # 	"/etc/cdi",
	I1002 20:12:19.614402   32280 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 20:12:19.614404   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614410   32280 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 20:12:19.614416   32280 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 20:12:19.614420   32280 command_runner.go:130] > # Defaults to false.
	I1002 20:12:19.614424   32280 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 20:12:19.614432   32280 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 20:12:19.614438   32280 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 20:12:19.614441   32280 command_runner.go:130] > # hooks_dir = [
	I1002 20:12:19.614445   32280 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 20:12:19.614449   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614454   32280 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 20:12:19.614462   32280 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 20:12:19.614467   32280 command_runner.go:130] > # its default mounts from the following two files:
	I1002 20:12:19.614471   32280 command_runner.go:130] > #
	I1002 20:12:19.614476   32280 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 20:12:19.614484   32280 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 20:12:19.614489   32280 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 20:12:19.614494   32280 command_runner.go:130] > #
	I1002 20:12:19.614500   32280 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 20:12:19.614506   32280 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 20:12:19.614514   32280 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 20:12:19.614519   32280 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 20:12:19.614524   32280 command_runner.go:130] > #
	I1002 20:12:19.614528   32280 command_runner.go:130] > # default_mounts_file = ""
	I1002 20:12:19.614532   32280 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 20:12:19.614539   32280 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 20:12:19.614545   32280 command_runner.go:130] > # pids_limit = -1
	I1002 20:12:19.614551   32280 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1002 20:12:19.614559   32280 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 20:12:19.614564   32280 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 20:12:19.614572   32280 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 20:12:19.614578   32280 command_runner.go:130] > # log_size_max = -1
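
The "truncated and re-opened" behavior described above is what keeps the cap hard: the file is cut back in place rather than rolled over to a new file. A minimal sketch of that pattern; the path and contents are illustrative:

package main

import (
	"fmt"
	"os"
)

// rotate truncates the log file in place and reopens it for appending,
// so the file never grows past the cap.
func rotate(path string) (*os.File, error) {
	if err := os.Truncate(path, 0); err != nil {
		return nil, err
	}
	return os.OpenFile(path, os.O_WRONLY|os.O_APPEND, 0o644)
}

func main() {
	path := "/tmp/example-container.log"
	// Seed a file to rotate.
	if err := os.WriteFile(path, []byte("old entries\n"), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	f, err := rotate(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	fmt.Fprintln(f, "fresh entry after truncation")
}
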
	I1002 20:12:19.614716   32280 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 20:12:19.614727   32280 command_runner.go:130] > # log_to_journald = false
	I1002 20:12:19.614733   32280 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 20:12:19.614738   32280 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 20:12:19.614745   32280 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 20:12:19.614750   32280 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 20:12:19.614757   32280 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 20:12:19.614761   32280 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 20:12:19.614766   32280 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 20:12:19.614772   32280 command_runner.go:130] > # read_only = false
	I1002 20:12:19.614777   32280 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 20:12:19.614785   32280 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 20:12:19.614789   32280 command_runner.go:130] > # live configuration reload.
	I1002 20:12:19.614795   32280 command_runner.go:130] > # log_level = "info"
	I1002 20:12:19.614800   32280 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 20:12:19.614807   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.614811   32280 command_runner.go:130] > # log_filter = ""
	I1002 20:12:19.614817   32280 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 20:12:19.614825   32280 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 20:12:19.614829   32280 command_runner.go:130] > # separated by comma.
	I1002 20:12:19.614839   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614846   32280 command_runner.go:130] > # uid_mappings = ""
	I1002 20:12:19.614851   32280 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 20:12:19.614859   32280 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 20:12:19.614863   32280 command_runner.go:130] > # separated by comma.
	I1002 20:12:19.614873   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614877   32280 command_runner.go:130] > # gid_mappings = ""
	I1002 20:12:19.614884   32280 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 20:12:19.614890   32280 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:12:19.614898   32280 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:12:19.614905   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614909   32280 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 20:12:19.614916   32280 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 20:12:19.614924   32280 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:12:19.614931   32280 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:12:19.614940   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614944   32280 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 20:12:19.614949   32280 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 20:12:19.614959   32280 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 20:12:19.614964   32280 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 20:12:19.614970   32280 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 20:12:19.614975   32280 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 20:12:19.614983   32280 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 20:12:19.614988   32280 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 20:12:19.614993   32280 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 20:12:19.614999   32280 command_runner.go:130] > # drop_infra_ctr = true
	I1002 20:12:19.615004   32280 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 20:12:19.615009   32280 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 20:12:19.615018   32280 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 20:12:19.615024   32280 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 20:12:19.615031   32280 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1002 20:12:19.615038   32280 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1002 20:12:19.615044   32280 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1002 20:12:19.615052   32280 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1002 20:12:19.615055   32280 command_runner.go:130] > # shared_cpuset = ""
	I1002 20:12:19.615063   32280 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 20:12:19.615068   32280 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 20:12:19.615073   32280 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 20:12:19.615080   32280 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 20:12:19.615086   32280 command_runner.go:130] > # pinns_path = ""
	I1002 20:12:19.615090   32280 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1002 20:12:19.615098   32280 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1002 20:12:19.615102   32280 command_runner.go:130] > # enable_criu_support = true
	I1002 20:12:19.615111   32280 command_runner.go:130] > # Enable/disable the generation of the container,
	I1002 20:12:19.615116   32280 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1002 20:12:19.615123   32280 command_runner.go:130] > # enable_pod_events = false
	I1002 20:12:19.615128   32280 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 20:12:19.615135   32280 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1002 20:12:19.615139   32280 command_runner.go:130] > # default_runtime = "crun"
	I1002 20:12:19.615146   32280 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 20:12:19.615152   32280 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior, where the missing source is created as a directory).
	I1002 20:12:19.615161   32280 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 20:12:19.615168   32280 command_runner.go:130] > # creation as a file is not desired either.
	I1002 20:12:19.615175   32280 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 20:12:19.615182   32280 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 20:12:19.615187   32280 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 20:12:19.615190   32280 command_runner.go:130] > # ]
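A minimal sketch of the /etc/hostname case described above, as a drop-in TOML entry (the path mirrors the example in the comments):

	absent_mount_sources_to_reject = [
	    "/etc/hostname",
	]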
	I1002 20:12:19.615195   32280 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 20:12:19.615201   32280 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 20:12:19.615207   32280 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1002 20:12:19.615214   32280 command_runner.go:130] > # Each entry in the table should follow the format:
	I1002 20:12:19.615216   32280 command_runner.go:130] > #
	I1002 20:12:19.615221   32280 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1002 20:12:19.615227   32280 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1002 20:12:19.615231   32280 command_runner.go:130] > # runtime_type = "oci"
	I1002 20:12:19.615237   32280 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1002 20:12:19.615241   32280 command_runner.go:130] > # inherit_default_runtime = false
	I1002 20:12:19.615246   32280 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1002 20:12:19.615252   32280 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1002 20:12:19.615256   32280 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1002 20:12:19.615262   32280 command_runner.go:130] > # monitor_env = []
	I1002 20:12:19.615266   32280 command_runner.go:130] > # privileged_without_host_devices = false
	I1002 20:12:19.615270   32280 command_runner.go:130] > # allowed_annotations = []
	I1002 20:12:19.615278   32280 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1002 20:12:19.615282   32280 command_runner.go:130] > # no_sync_log = false
	I1002 20:12:19.615288   32280 command_runner.go:130] > # default_annotations = {}
	I1002 20:12:19.615293   32280 command_runner.go:130] > # stream_websockets = false
	I1002 20:12:19.615299   32280 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:12:19.615333   32280 command_runner.go:130] > # Where:
	I1002 20:12:19.615343   32280 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1002 20:12:19.615349   32280 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1002 20:12:19.615354   32280 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 20:12:19.615363   32280 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 20:12:19.615366   32280 command_runner.go:130] > #   in $PATH.
	I1002 20:12:19.615375   32280 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1002 20:12:19.615380   32280 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 20:12:19.615387   32280 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1002 20:12:19.615391   32280 command_runner.go:130] > #   state.
	I1002 20:12:19.615400   32280 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 20:12:19.615413   32280 command_runner.go:130] > #   file. This can only be used with the VM runtime_type.
	I1002 20:12:19.615421   32280 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1002 20:12:19.615428   32280 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1002 20:12:19.615435   32280 command_runner.go:130] > #   the values from the default runtime at load time.
	I1002 20:12:19.615441   32280 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 20:12:19.615446   32280 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 20:12:19.615452   32280 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 20:12:19.615458   32280 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 20:12:19.615465   32280 command_runner.go:130] > #   The currently recognized values are:
	I1002 20:12:19.615470   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 20:12:19.615479   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 20:12:19.615485   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 20:12:19.615490   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 20:12:19.615499   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 20:12:19.615505   32280 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 20:12:19.615514   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1002 20:12:19.615521   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1002 20:12:19.615529   32280 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 20:12:19.615534   32280 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1002 20:12:19.615541   32280 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1002 20:12:19.615549   32280 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1002 20:12:19.615555   32280 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1002 20:12:19.615564   32280 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1002 20:12:19.615569   32280 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1002 20:12:19.615579   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1002 20:12:19.615586   32280 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1002 20:12:19.615589   32280 command_runner.go:130] > #   deprecated option "conmon".
	I1002 20:12:19.615596   32280 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1002 20:12:19.615601   32280 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1002 20:12:19.615607   32280 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1002 20:12:19.615614   32280 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 20:12:19.615621   32280 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1002 20:12:19.615628   32280 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1002 20:12:19.615634   32280 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1002 20:12:19.615638   32280 command_runner.go:130] > #   conmon-rs by using:
	I1002 20:12:19.615656   32280 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1002 20:12:19.615668   32280 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1002 20:12:19.615682   32280 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1002 20:12:19.615690   32280 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1002 20:12:19.615695   32280 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1002 20:12:19.615704   32280 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1002 20:12:19.615712   32280 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1002 20:12:19.615720   32280 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1002 20:12:19.615731   32280 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1002 20:12:19.615747   32280 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1002 20:12:19.615756   32280 command_runner.go:130] > #   when the machine crashes.
	I1002 20:12:19.615765   32280 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1002 20:12:19.615774   32280 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1002 20:12:19.615784   32280 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1002 20:12:19.615788   32280 command_runner.go:130] > #   seccomp profile for the runtime.
	I1002 20:12:19.615797   32280 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1002 20:12:19.615804   32280 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
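Putting the documented fields together, a minimal sketch of a custom handler entry (the handler name and every path below are hypothetical, not taken from this host):

	[crio.runtime.runtimes.kata]
	runtime_path = "/usr/bin/kata-runtime"                # hypothetical path
	runtime_type = "vm"
	runtime_config_path = "/etc/kata/configuration.toml"  # hypothetical; only valid with the "vm" type
	privileged_without_host_devices = true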
	I1002 20:12:19.615810   32280 command_runner.go:130] > #
	I1002 20:12:19.615818   32280 command_runner.go:130] > # Using the seccomp notifier feature:
	I1002 20:12:19.615826   32280 command_runner.go:130] > #
	I1002 20:12:19.615838   32280 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1002 20:12:19.615850   32280 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1002 20:12:19.615854   32280 command_runner.go:130] > #
	I1002 20:12:19.615860   32280 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1002 20:12:19.615868   32280 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1002 20:12:19.615871   32280 command_runner.go:130] > #
	I1002 20:12:19.615880   32280 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1002 20:12:19.615889   32280 command_runner.go:130] > # feature.
	I1002 20:12:19.615894   32280 command_runner.go:130] > #
	I1002 20:12:19.615906   32280 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1002 20:12:19.615918   32280 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1002 20:12:19.615931   32280 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1002 20:12:19.615943   32280 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1002 20:12:19.615954   32280 command_runner.go:130] > # seconds if "io.kubernetes.cri-o.seccompNotifierAction" is set to "stop".
	I1002 20:12:19.615957   32280 command_runner.go:130] > #
	I1002 20:12:19.615964   32280 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1002 20:12:19.615972   32280 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1002 20:12:19.615977   32280 command_runner.go:130] > #
	I1002 20:12:19.615989   32280 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1002 20:12:19.616001   32280 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1002 20:12:19.616010   32280 command_runner.go:130] > #
	I1002 20:12:19.616019   32280 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1002 20:12:19.616031   32280 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1002 20:12:19.616039   32280 command_runner.go:130] > # limitation.
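A minimal sketch of wiring up the notifier as described above: the handler (crun here, matching the entry that follows) must allow the annotation, and the pod must then set "io.kubernetes.cri-o.seccompNotifierAction=stop" together with restartPolicy "Never":

	[crio.runtime.runtimes.crun]
	allowed_annotations = [
	    "io.kubernetes.cri-o.seccompNotifierAction",
	]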
	I1002 20:12:19.616045   32280 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1002 20:12:19.616054   32280 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1002 20:12:19.616058   32280 command_runner.go:130] > runtime_type = ""
	I1002 20:12:19.616063   32280 command_runner.go:130] > runtime_root = "/run/crun"
	I1002 20:12:19.616073   32280 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:12:19.616082   32280 command_runner.go:130] > runtime_config_path = ""
	I1002 20:12:19.616091   32280 command_runner.go:130] > container_min_memory = ""
	I1002 20:12:19.616098   32280 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:12:19.616107   32280 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:12:19.616115   32280 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:12:19.616124   32280 command_runner.go:130] > allowed_annotations = [
	I1002 20:12:19.616131   32280 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1002 20:12:19.616137   32280 command_runner.go:130] > ]
	I1002 20:12:19.616141   32280 command_runner.go:130] > privileged_without_host_devices = false
	I1002 20:12:19.616146   32280 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 20:12:19.616157   32280 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1002 20:12:19.616163   32280 command_runner.go:130] > runtime_type = ""
	I1002 20:12:19.616173   32280 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 20:12:19.616180   32280 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:12:19.616189   32280 command_runner.go:130] > runtime_config_path = ""
	I1002 20:12:19.616196   32280 command_runner.go:130] > container_min_memory = ""
	I1002 20:12:19.616206   32280 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:12:19.616215   32280 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:12:19.616221   32280 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:12:19.616228   32280 command_runner.go:130] > privileged_without_host_devices = false
	I1002 20:12:19.616238   32280 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 20:12:19.616247   32280 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 20:12:19.616258   32280 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 20:12:19.616272   32280 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1002 20:12:19.616289   32280 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1002 20:12:19.616305   32280 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1002 20:12:19.616314   32280 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1002 20:12:19.616323   32280 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 20:12:19.616340   32280 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 20:12:19.616353   32280 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 20:12:19.616366   32280 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 20:12:19.616380   32280 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 20:12:19.616387   32280 command_runner.go:130] > # Example:
	I1002 20:12:19.616393   32280 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 20:12:19.616401   32280 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 20:12:19.616408   32280 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 20:12:19.616420   32280 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 20:12:19.616430   32280 command_runner.go:130] > # cpuset = "0-1"
	I1002 20:12:19.616435   32280 command_runner.go:130] > # cpushares = "5"
	I1002 20:12:19.616442   32280 command_runner.go:130] > # cpuquota = "1000"
	I1002 20:12:19.616451   32280 command_runner.go:130] > # cpuperiod = "100000"
	I1002 20:12:19.616457   32280 command_runner.go:130] > # cpulimit = "35"
	I1002 20:12:19.616466   32280 command_runner.go:130] > # Where:
	I1002 20:12:19.616473   32280 command_runner.go:130] > # The workload name is workload-type.
	I1002 20:12:19.616483   32280 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 20:12:19.616489   32280 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 20:12:19.616502   32280 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 20:12:19.616516   32280 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 20:12:19.616528   32280 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1002 20:12:19.616541   32280 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1002 20:12:19.616551   32280 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1002 20:12:19.616560   32280 command_runner.go:130] > # Default value is set to true
	I1002 20:12:19.616566   32280 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1002 20:12:19.616574   32280 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1002 20:12:19.616582   32280 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1002 20:12:19.616592   32280 command_runner.go:130] > # Default value is set to 'false'
	I1002 20:12:19.616601   32280 command_runner.go:130] > # disable_hostport_mapping = false
	I1002 20:12:19.616612   32280 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1002 20:12:19.616624   32280 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1002 20:12:19.616632   32280 command_runner.go:130] > # timezone = ""
	I1002 20:12:19.616642   32280 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 20:12:19.616658   32280 command_runner.go:130] > #
	I1002 20:12:19.616667   32280 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 20:12:19.616686   32280 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1002 20:12:19.616695   32280 command_runner.go:130] > [crio.image]
	I1002 20:12:19.616703   32280 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 20:12:19.616714   32280 command_runner.go:130] > # default_transport = "docker://"
	I1002 20:12:19.616725   32280 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 20:12:19.616732   32280 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:12:19.616739   32280 command_runner.go:130] > # global_auth_file = ""
	I1002 20:12:19.616751   32280 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 20:12:19.616762   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.616771   32280 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1002 20:12:19.616783   32280 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 20:12:19.616795   32280 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:12:19.616804   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.616811   32280 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 20:12:19.616817   32280 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 20:12:19.616825   32280 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1002 20:12:19.616830   32280 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1002 20:12:19.616837   32280 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 20:12:19.616842   32280 command_runner.go:130] > # pause_command = "/pause"
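A minimal sketch of the pause_command fallback behavior described above (values illustrative):

	pause_command = "/pause"   # explicit command
	# pause_command = ""       # explicitly empty: fall back to the pause image's entrypoint and command
	# (leaving the option commented out entirely falls back to the default "/pause")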
	I1002 20:12:19.616852   32280 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1002 20:12:19.616864   32280 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1002 20:12:19.616877   32280 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1002 20:12:19.616889   32280 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1002 20:12:19.616899   32280 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1002 20:12:19.616911   32280 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1002 20:12:19.616918   32280 command_runner.go:130] > # pinned_images = [
	I1002 20:12:19.616921   32280 command_runner.go:130] > # ]
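A minimal sketch of the three pinned_images pattern styles described above (image names illustrative):

	pinned_images = [
	    "registry.k8s.io/pause:3.10.1",   # exact: must match the entire name
	    "registry.k8s.io/pause*",         # glob: wildcard at the end
	    "*pause*",                        # keyword: wildcards on both ends
	]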
	I1002 20:12:19.616928   32280 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 20:12:19.616937   32280 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 20:12:19.616942   32280 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 20:12:19.616947   32280 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 20:12:19.616955   32280 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 20:12:19.616959   32280 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1002 20:12:19.616965   32280 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1002 20:12:19.616973   32280 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1002 20:12:19.616979   32280 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1002 20:12:19.616988   32280 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1002 20:12:19.616997   32280 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1002 20:12:19.617009   32280 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1002 20:12:19.617020   32280 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 20:12:19.617036   32280 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 20:12:19.617044   32280 command_runner.go:130] > # changing them here.
	I1002 20:12:19.617053   32280 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1002 20:12:19.617062   32280 command_runner.go:130] > # insecure_registries = [
	I1002 20:12:19.617066   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617073   32280 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 20:12:19.617078   32280 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1002 20:12:19.617084   32280 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 20:12:19.617089   32280 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 20:12:19.617095   32280 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 20:12:19.617101   32280 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1002 20:12:19.617107   32280 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1002 20:12:19.617111   32280 command_runner.go:130] > # auto_reload_registries = false
	I1002 20:12:19.617117   32280 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1002 20:12:19.617127   32280 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1002 20:12:19.617135   32280 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1002 20:12:19.617138   32280 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1002 20:12:19.617143   32280 command_runner.go:130] > # The mode of short name resolution.
	I1002 20:12:19.617149   32280 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1002 20:12:19.617158   32280 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1002 20:12:19.617163   32280 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1002 20:12:19.617169   32280 command_runner.go:130] > # short_name_mode = "enforcing"
	I1002 20:12:19.617175   32280 command_runner.go:130] > # OCIArtifactMountSupport determines whether CRI-O should support OCI artifacts.
	I1002 20:12:19.617182   32280 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1002 20:12:19.617186   32280 command_runner.go:130] > # oci_artifact_mount_support = true
	I1002 20:12:19.617192   32280 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 20:12:19.617197   32280 command_runner.go:130] > # CNI plugins.
	I1002 20:12:19.617200   32280 command_runner.go:130] > [crio.network]
	I1002 20:12:19.617206   32280 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 20:12:19.617212   32280 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1002 20:12:19.617219   32280 command_runner.go:130] > # cni_default_network = ""
	I1002 20:12:19.617231   32280 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 20:12:19.617240   32280 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 20:12:19.617246   32280 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 20:12:19.617250   32280 command_runner.go:130] > # plugin_dirs = [
	I1002 20:12:19.617254   32280 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 20:12:19.617256   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617261   32280 command_runner.go:130] > # List of included pod metrics.
	I1002 20:12:19.617266   32280 command_runner.go:130] > # included_pod_metrics = [
	I1002 20:12:19.617269   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617274   32280 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1002 20:12:19.617279   32280 command_runner.go:130] > [crio.metrics]
	I1002 20:12:19.617284   32280 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 20:12:19.617290   32280 command_runner.go:130] > # enable_metrics = false
	I1002 20:12:19.617294   32280 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 20:12:19.617298   32280 command_runner.go:130] > # By default, all metrics are enabled.
	I1002 20:12:19.617306   32280 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 20:12:19.617312   32280 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 20:12:19.617320   32280 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 20:12:19.617323   32280 command_runner.go:130] > # metrics_collectors = [
	I1002 20:12:19.617327   32280 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 20:12:19.617331   32280 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1002 20:12:19.617334   32280 command_runner.go:130] > # 	"containers_oom_total",
	I1002 20:12:19.617338   32280 command_runner.go:130] > # 	"processes_defunct",
	I1002 20:12:19.617341   32280 command_runner.go:130] > # 	"operations_total",
	I1002 20:12:19.617345   32280 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 20:12:19.617348   32280 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 20:12:19.617352   32280 command_runner.go:130] > # 	"operations_errors_total",
	I1002 20:12:19.617355   32280 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 20:12:19.617359   32280 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 20:12:19.617363   32280 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 20:12:19.617367   32280 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 20:12:19.617371   32280 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 20:12:19.617375   32280 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 20:12:19.617379   32280 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1002 20:12:19.617383   32280 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1002 20:12:19.617388   32280 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1002 20:12:19.617391   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617397   32280 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1002 20:12:19.617403   32280 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1002 20:12:19.617407   32280 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 20:12:19.617411   32280 command_runner.go:130] > # metrics_port = 9090
	I1002 20:12:19.617415   32280 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 20:12:19.617419   32280 command_runner.go:130] > # metrics_socket = ""
	I1002 20:12:19.617423   32280 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 20:12:19.617429   32280 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 20:12:19.617437   32280 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 20:12:19.617441   32280 command_runner.go:130] > # certificate on any modification event.
	I1002 20:12:19.617447   32280 command_runner.go:130] > # metrics_cert = ""
	I1002 20:12:19.617452   32280 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 20:12:19.617456   32280 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 20:12:19.617460   32280 command_runner.go:130] > # metrics_key = ""
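A minimal sketch of enabling the metrics server with a reduced collector set (host, port and collector names taken from the comments above):

	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	metrics_collectors = [
	    "operations_total",
	    "image_pulls_failure_total",
	]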
	I1002 20:12:19.617465   32280 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 20:12:19.617471   32280 command_runner.go:130] > [crio.tracing]
	I1002 20:12:19.617476   32280 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 20:12:19.617482   32280 command_runner.go:130] > # enable_tracing = false
	I1002 20:12:19.617488   32280 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1002 20:12:19.617494   32280 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1002 20:12:19.617500   32280 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1002 20:12:19.617506   32280 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
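A minimal sketch of enabling trace export with always-on sampling, per the comments above:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000   # always sample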
	I1002 20:12:19.617511   32280 command_runner.go:130] > # CRI-O NRI configuration.
	I1002 20:12:19.617514   32280 command_runner.go:130] > [crio.nri]
	I1002 20:12:19.617518   32280 command_runner.go:130] > # Globally enable or disable NRI.
	I1002 20:12:19.617524   32280 command_runner.go:130] > # enable_nri = true
	I1002 20:12:19.617527   32280 command_runner.go:130] > # NRI socket to listen on.
	I1002 20:12:19.617533   32280 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1002 20:12:19.617539   32280 command_runner.go:130] > # NRI plugin directory to use.
	I1002 20:12:19.617543   32280 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1002 20:12:19.617547   32280 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1002 20:12:19.617552   32280 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1002 20:12:19.617560   32280 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1002 20:12:19.617591   32280 command_runner.go:130] > # nri_disable_connections = false
	I1002 20:12:19.617598   32280 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1002 20:12:19.617604   32280 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1002 20:12:19.617612   32280 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1002 20:12:19.617623   32280 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1002 20:12:19.617630   32280 command_runner.go:130] > # NRI default validator configuration.
	I1002 20:12:19.617637   32280 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1002 20:12:19.617645   32280 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1002 20:12:19.617661   32280 command_runner.go:130] > # can be restricted/rejected:
	I1002 20:12:19.617671   32280 command_runner.go:130] > # - OCI hook injection
	I1002 20:12:19.617683   32280 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1002 20:12:19.617691   32280 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1002 20:12:19.617696   32280 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1002 20:12:19.617702   32280 command_runner.go:130] > # - adjustment of linux namespaces
	I1002 20:12:19.617708   32280 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1002 20:12:19.617715   32280 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1002 20:12:19.617720   32280 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1002 20:12:19.617722   32280 command_runner.go:130] > #
	I1002 20:12:19.617726   32280 command_runner.go:130] > # [crio.nri.default_validator]
	I1002 20:12:19.617733   32280 command_runner.go:130] > # nri_enable_default_validator = false
	I1002 20:12:19.617737   32280 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1002 20:12:19.617743   32280 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1002 20:12:19.617750   32280 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1002 20:12:19.617755   32280 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1002 20:12:19.617759   32280 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1002 20:12:19.617764   32280 command_runner.go:130] > # nri_validator_required_plugins = [
	I1002 20:12:19.617767   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617771   32280 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
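A minimal sketch of the default validator rejecting OCI hook injection, using only keys listed above:

	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true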
	I1002 20:12:19.617779   32280 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 20:12:19.617782   32280 command_runner.go:130] > [crio.stats]
	I1002 20:12:19.617787   32280 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 20:12:19.617796   32280 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 20:12:19.617800   32280 command_runner.go:130] > # stats_collection_period = 0
	I1002 20:12:19.617807   32280 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1002 20:12:19.617812   32280 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1002 20:12:19.617819   32280 command_runner.go:130] > # collection_period = 0
	I1002 20:12:19.617847   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597735388Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1002 20:12:19.617857   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597762161Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1002 20:12:19.617879   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597788561Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1002 20:12:19.617891   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597814431Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1002 20:12:19.617901   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597905829Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:19.617910   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.59812179Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1002 20:12:19.617937   32280 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 20:12:19.618034   32280 cni.go:84] Creating CNI manager for ""
	I1002 20:12:19.618045   32280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:12:19.618055   32280 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:12:19.618074   32280 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753218 NodeName:functional-753218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:12:19.618185   32280 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753218"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:12:19.618237   32280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:12:19.625483   32280 command_runner.go:130] > kubeadm
	I1002 20:12:19.625499   32280 command_runner.go:130] > kubectl
	I1002 20:12:19.625503   32280 command_runner.go:130] > kubelet
	I1002 20:12:19.626080   32280 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:12:19.626131   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:12:19.633273   32280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:12:19.644695   32280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:12:19.656113   32280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1002 20:12:19.667414   32280 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:12:19.670740   32280 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1002 20:12:19.670794   32280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:12:19.752159   32280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:12:19.764280   32280 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218 for IP: 192.168.49.2
	I1002 20:12:19.764303   32280 certs.go:195] generating shared ca certs ...
	I1002 20:12:19.764324   32280 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:19.764461   32280 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:12:19.764507   32280 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:12:19.764516   32280 certs.go:257] generating profile certs ...
	I1002 20:12:19.764596   32280 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key
	I1002 20:12:19.764641   32280 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key.2c64f804
	I1002 20:12:19.764700   32280 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key
	I1002 20:12:19.764711   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:12:19.764723   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:12:19.764735   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:12:19.764749   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:12:19.764761   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:12:19.764773   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:12:19.764785   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:12:19.764797   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:12:19.764840   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:12:19.764868   32280 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:12:19.764878   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:12:19.764907   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:12:19.764932   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:12:19.764953   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:12:19.764991   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:12:19.765016   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:19.765029   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.765042   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:12:19.765474   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:12:19.782548   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:12:19.799734   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:12:19.816390   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:12:19.832589   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:12:19.848700   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:12:19.864849   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:12:19.880775   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:12:19.896846   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:12:19.913614   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:12:19.929578   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:12:19.945677   32280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:12:19.957745   32280 ssh_runner.go:195] Run: openssl version
	I1002 20:12:19.963258   32280 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1002 20:12:19.963501   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:12:19.971695   32280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.975234   32280 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.975257   32280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.975294   32280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:12:20.009021   32280 command_runner.go:130] > 51391683
	I1002 20:12:20.009100   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:12:20.016966   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:12:20.025422   32280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.029194   32280 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.029238   32280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.029282   32280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.064218   32280 command_runner.go:130] > 3ec20f2e
	I1002 20:12:20.064321   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:12:20.072502   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:12:20.080739   32280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.084507   32280 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.084542   32280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.084576   32280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.118973   32280 command_runner.go:130] > b5213941
	I1002 20:12:20.119045   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:12:20.127219   32280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:12:20.130733   32280 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:12:20.130756   32280 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1002 20:12:20.130765   32280 command_runner.go:130] > Device: 8,1	Inode: 579408      Links: 1
	I1002 20:12:20.130774   32280 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:12:20.130783   32280 command_runner.go:130] > Access: 2025-10-02 20:08:10.644972655 +0000
	I1002 20:12:20.130793   32280 command_runner.go:130] > Modify: 2025-10-02 20:04:06.596879146 +0000
	I1002 20:12:20.130799   32280 command_runner.go:130] > Change: 2025-10-02 20:04:06.596879146 +0000
	I1002 20:12:20.130806   32280 command_runner.go:130] >  Birth: 2025-10-02 20:04:06.596879146 +0000
	I1002 20:12:20.130872   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:12:20.164340   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.164601   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:12:20.199434   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.199512   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:12:20.233489   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.233589   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:12:20.266980   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.267235   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:12:20.300792   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.301105   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 20:12:20.334621   32280 command_runner.go:130] > Certificate will not expire
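
Note: `-checkend 86400` asks openssl whether the certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" means NotAfter is more than a day away. The same check can be done in pure Go with crypto/x509 (a sketch, assuming a single-certificate PEM file):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within d — the question `openssl x509 -checkend` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	if soon {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }
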
	I1002 20:12:20.334895   32280 kubeadm.go:400] StartCluster: {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:12:20.334978   32280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:12:20.335040   32280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:12:20.362233   32280 cri.go:89] found id: ""
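
Note: the empty `found id:` result means crictl returned no matching kube-system container IDs at this point, which is part of what steers minikube toward the restart path below. A simplified sketch of the listing step (running crictl directly rather than through the `sudo -s eval` wrapper ssh_runner uses):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // kubeSystemContainers lists container IDs labeled with the kube-system
    // namespace, mirroring the crictl invocation in the log.
    func kubeSystemContainers() ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := kubeSystemContainers()
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Printf("found %d container(s): %v\n", len(ids), ids)
    }
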
	I1002 20:12:20.362287   32280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:12:20.370000   32280 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 20:12:20.370022   32280 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 20:12:20.370028   32280 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 20:12:20.370045   32280 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:12:20.370050   32280 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:12:20.370092   32280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:12:20.377231   32280 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:12:20.377306   32280 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-753218" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.377343   32280 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-9327/kubeconfig needs updating (will repair): [kubeconfig missing "functional-753218" cluster setting kubeconfig missing "functional-753218" context setting]
	I1002 20:12:20.377618   32280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:20.379016   32280 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.379143   32280 kapi.go:59] client config for functional-753218: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
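
Note: the rest.Config dump above is the client-go configuration minikube builds from the repaired kubeconfig: the Host, client cert/key, and CA file all point into the profile directory. A minimal sketch of constructing the same kind of client from a kubeconfig path (illustrative, not minikube's kapi.go):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Path is the kubeconfig named in the log; adjust for your machine.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21683-9327/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("apiserver:", cfg.Host) // e.g. https://192.168.49.2:8441
    	if _, err := kubernetes.NewForConfig(cfg); err != nil {
    		panic(err)
    	}
    }
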
	I1002 20:12:20.379525   32280 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:12:20.379543   32280 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:12:20.379548   32280 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:12:20.379552   32280 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:12:20.379556   32280 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:12:20.379580   32280 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:12:20.379896   32280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:12:20.387047   32280 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:12:20.387086   32280 kubeadm.go:601] duration metric: took 17.030465ms to restartPrimaryControlPlane
	I1002 20:12:20.387097   32280 kubeadm.go:402] duration metric: took 52.210982ms to StartCluster
	I1002 20:12:20.387113   32280 settings.go:142] acquiring lock: {Name:mk7417292ad351472478c5266970b58ebdd4130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:20.387221   32280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.387762   32280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:20.387978   32280 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:12:20.388069   32280 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:12:20.388123   32280 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:12:20.388170   32280 addons.go:69] Setting storage-provisioner=true in profile "functional-753218"
	I1002 20:12:20.388189   32280 addons.go:238] Setting addon storage-provisioner=true in "functional-753218"
	I1002 20:12:20.388224   32280 host.go:66] Checking if "functional-753218" exists ...
	I1002 20:12:20.388188   32280 addons.go:69] Setting default-storageclass=true in profile "functional-753218"
	I1002 20:12:20.388303   32280 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-753218"
	I1002 20:12:20.388534   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:20.388593   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:20.390858   32280 out.go:179] * Verifying Kubernetes components...
	I1002 20:12:20.392041   32280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:12:20.408831   32280 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.409013   32280 kapi.go:59] client config for functional-753218: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:12:20.409334   32280 addons.go:238] Setting addon default-storageclass=true in "functional-753218"
	I1002 20:12:20.409372   32280 host.go:66] Checking if "functional-753218" exists ...
	I1002 20:12:20.409857   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:20.409921   32280 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:12:20.411389   32280 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:20.411408   32280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:12:20.411451   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:20.434249   32280 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:20.434269   32280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:12:20.434323   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:20.437366   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:20.453124   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:20.491163   32280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:12:20.504681   32280 node_ready.go:35] waiting up to 6m0s for node "functional-753218" to be "Ready" ...
	I1002 20:12:20.504813   32280 type.go:168] "Request Body" body=""
	I1002 20:12:20.504901   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:20.505187   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
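
Note: the repeated GET .../api/v1/nodes/functional-753218 round trips that follow are minikube polling for the node's Ready condition inside the 6m0s budget announced above; the empty Response lines and the later "connection refused" warnings show the apiserver is not accepting connections yet. A hedged sketch of such a poll with client-go (the interval and error handling are assumptions, not minikube's exact node_ready.go logic):

    package nodewait

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForNodeReady polls until the node reports Ready=True or the
    // 6-minute budget from the log line runs out.
    func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				// "connection refused" while the apiserver restarts is
    				// expected here; keep polling instead of failing.
    				return false, nil
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
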
	I1002 20:12:20.544925   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:20.560749   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:20.598254   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:20.598305   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.598334   32280 retry.go:31] will retry after 360.790251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.611750   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:20.611829   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.611854   32280 retry.go:31] will retry after 210.270105ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
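
Note: each failed apply above is retried after a growing, jittered delay (360ms and 210ms here, climbing to several seconds later in the log) until the apiserver on :8441 accepts connections again. A sketch of that retry pattern (the backoff constants and attempt limit are illustrative, not minikube's retry.go values):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs `kubectl apply --force -f <path>` until it
    // succeeds or attempts run out, sleeping a jittered, roughly doubling
    // delay between tries.
    func applyWithRetry(path string, attempts int) error {
    	delay := 200 * time.Millisecond
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = exec.Command("kubectl", "apply", "--force", "-f", path).Run(); err == nil {
    			return nil
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 6); err != nil {
    		fmt.Println("giving up:", err)
    	}
    }
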
	I1002 20:12:20.822270   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:20.872283   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:20.874485   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.874514   32280 retry.go:31] will retry after 244.966298ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.959846   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:21.005341   32280 type.go:168] "Request Body" body=""
	I1002 20:12:21.005421   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:21.005781   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:21.012418   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.012451   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.012466   32280 retry.go:31] will retry after 409.292121ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.119728   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:21.168429   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.170739   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.170771   32280 retry.go:31] will retry after 294.217693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.422106   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:21.465688   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:21.470239   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.472502   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.472537   32280 retry.go:31] will retry after 332.995728ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.505685   32280 type.go:168] "Request Body" body=""
	I1002 20:12:21.505778   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:21.506123   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:21.516911   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.516971   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.516996   32280 retry.go:31] will retry after 954.810325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.806393   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:21.857573   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.857614   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.857637   32280 retry.go:31] will retry after 1.033500231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.004877   32280 type.go:168] "Request Body" body=""
	I1002 20:12:22.004976   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:22.005310   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:22.472906   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:22.505435   32280 type.go:168] "Request Body" body=""
	I1002 20:12:22.505517   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:22.505893   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:22.505957   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:22.524411   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:22.524454   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.524474   32280 retry.go:31] will retry after 931.915639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.892005   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:22.942851   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:22.942928   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.942955   32280 retry.go:31] will retry after 1.834952264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:23.004927   32280 type.go:168] "Request Body" body=""
	I1002 20:12:23.005007   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:23.005354   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:23.456821   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:23.505094   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:23.505147   32280 type.go:168] "Request Body" body=""
	I1002 20:12:23.505217   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:23.505484   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:23.507597   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:23.507626   32280 retry.go:31] will retry after 2.313716894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:24.005157   32280 type.go:168] "Request Body" body=""
	I1002 20:12:24.005267   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:24.005594   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:24.505508   32280 type.go:168] "Request Body" body=""
	I1002 20:12:24.505632   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:24.506012   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:24.506092   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:24.778419   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:24.830315   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:24.830361   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:24.830382   32280 retry.go:31] will retry after 2.530323246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:25.005736   32280 type.go:168] "Request Body" body=""
	I1002 20:12:25.005808   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:25.006117   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:25.504853   32280 type.go:168] "Request Body" body=""
	I1002 20:12:25.504920   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:25.505273   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:25.821714   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:25.872812   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:25.872859   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:25.872881   32280 retry.go:31] will retry after 1.957365536s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:26.005078   32280 type.go:168] "Request Body" body=""
	I1002 20:12:26.005153   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:26.005503   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:26.505250   32280 type.go:168] "Request Body" body=""
	I1002 20:12:26.505323   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:26.505711   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:27.005530   32280 type.go:168] "Request Body" body=""
	I1002 20:12:27.005599   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:27.005959   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:27.006023   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:27.361473   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:27.411520   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:27.413776   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:27.413807   32280 retry.go:31] will retry after 3.768585845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:27.504922   32280 type.go:168] "Request Body" body=""
	I1002 20:12:27.505019   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:27.505351   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:27.830904   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:27.880071   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:27.882324   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:27.882350   32280 retry.go:31] will retry after 2.676983733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:28.005719   32280 type.go:168] "Request Body" body=""
	I1002 20:12:28.005789   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:28.006101   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:28.504826   32280 type.go:168] "Request Body" body=""
	I1002 20:12:28.504909   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:28.505226   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:29.004968   32280 type.go:168] "Request Body" body=""
	I1002 20:12:29.005052   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:29.005361   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:29.505178   32280 type.go:168] "Request Body" body=""
	I1002 20:12:29.505270   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:29.505576   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:29.505628   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:30.005335   32280 type.go:168] "Request Body" body=""
	I1002 20:12:30.005400   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:30.005747   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:30.505557   32280 type.go:168] "Request Body" body=""
	I1002 20:12:30.505643   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:30.505971   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:30.560186   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:30.610807   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:30.610870   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:30.610892   32280 retry.go:31] will retry after 7.973230912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:31.005274   32280 type.go:168] "Request Body" body=""
	I1002 20:12:31.005351   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:31.005789   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:31.182990   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:31.231953   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:31.234462   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:31.234491   32280 retry.go:31] will retry after 5.687657455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:31.504842   32280 type.go:168] "Request Body" body=""
	I1002 20:12:31.504956   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:31.505254   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:32.005885   32280 type.go:168] "Request Body" body=""
	I1002 20:12:32.005962   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:32.006262   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:32.006314   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:32.504840   32280 type.go:168] "Request Body" body=""
	I1002 20:12:32.504910   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:32.505216   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:33.005827   32280 type.go:168] "Request Body" body=""
	I1002 20:12:33.005903   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:33.006210   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:33.505861   32280 type.go:168] "Request Body" body=""
	I1002 20:12:33.505928   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:33.506234   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:34.005834   32280 type.go:168] "Request Body" body=""
	I1002 20:12:34.005939   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:34.006292   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:34.006347   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:34.505067   32280 type.go:168] "Request Body" body=""
	I1002 20:12:34.505178   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:34.505476   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:35.005027   32280 type.go:168] "Request Body" body=""
	I1002 20:12:35.005102   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:35.005423   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:35.504956   32280 type.go:168] "Request Body" body=""
	I1002 20:12:35.505018   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:35.505338   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:36.004897   32280 type.go:168] "Request Body" body=""
	I1002 20:12:36.005010   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:36.005325   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:36.504908   32280 type.go:168] "Request Body" body=""
	I1002 20:12:36.504975   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:36.505273   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:36.505325   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
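[Editor's note] The round_trippers/node_ready lines above are a roughly twice-per-second poll of GET /api/v1/nodes/functional-753218, checking the node's Ready condition and tolerating connection-refused errors while the apiserver restarts. Minikube's node_ready.go is not reproduced here; this is a standalone client-go sketch of an equivalent poll, assuming only the kubeconfig path and node name visible in the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := nodeReady(context.Background(), cs, "functional-753218")
		if err != nil {
			// While the apiserver is down this is the same connection-refused
			// error the log warns about; keep polling.
			fmt.Println("will retry:", err)
		} else if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls about twice a second
	}
}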
	I1002 20:12:36.922844   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:36.972691   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:36.975093   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:36.975120   32280 retry.go:31] will retry after 6.057609391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:37.005334   32280 type.go:168] "Request Body" body=""
	I1002 20:12:37.005422   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:37.005758   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:37.505360   32280 type.go:168] "Request Body" body=""
	I1002 20:12:37.505473   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:37.505826   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:38.005595   32280 type.go:168] "Request Body" body=""
	I1002 20:12:38.005685   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:38.005995   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:38.505731   32280 type.go:168] "Request Body" body=""
	I1002 20:12:38.505833   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:38.506204   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:38.506258   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:38.584343   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:38.634498   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:38.634541   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:38.634559   32280 retry.go:31] will retry after 11.473349324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:39.004966   32280 type.go:168] "Request Body" body=""
	I1002 20:12:39.005047   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:39.005329   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:39.505287   32280 type.go:168] "Request Body" body=""
	I1002 20:12:39.505349   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:39.505690   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:40.005217   32280 type.go:168] "Request Body" body=""
	I1002 20:12:40.005283   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:40.005689   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:40.505522   32280 type.go:168] "Request Body" body=""
	I1002 20:12:40.505586   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:40.505931   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:41.005519   32280 type.go:168] "Request Body" body=""
	I1002 20:12:41.005620   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:41.005984   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:41.006049   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:41.505595   32280 type.go:168] "Request Body" body=""
	I1002 20:12:41.505678   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:41.506021   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:42.005588   32280 type.go:168] "Request Body" body=""
	I1002 20:12:42.005666   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:42.005990   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:42.505580   32280 type.go:168] "Request Body" body=""
	I1002 20:12:42.505660   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:42.506010   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:43.005624   32280 type.go:168] "Request Body" body=""
	I1002 20:12:43.005704   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:43.006025   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:43.006077   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:43.033216   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:43.084626   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:43.084680   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:43.084700   32280 retry.go:31] will retry after 13.696949746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:43.504971   32280 type.go:168] "Request Body" body=""
	I1002 20:12:43.505052   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:43.505379   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:44.004912   32280 type.go:168] "Request Body" body=""
	I1002 20:12:44.004988   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:44.005285   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:44.504985   32280 type.go:168] "Request Body" body=""
	I1002 20:12:44.505082   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:44.505402   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:45.004953   32280 type.go:168] "Request Body" body=""
	I1002 20:12:45.005026   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:45.005321   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:45.504904   32280 type.go:168] "Request Body" body=""
	I1002 20:12:45.504997   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:45.505300   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:45.505354   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:46.004960   32280 type.go:168] "Request Body" body=""
	I1002 20:12:46.005023   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:46.005350   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:46.504882   32280 type.go:168] "Request Body" body=""
	I1002 20:12:46.505005   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:46.505316   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:47.004909   32280 type.go:168] "Request Body" body=""
	I1002 20:12:47.004973   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:47.005265   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:47.505882   32280 type.go:168] "Request Body" body=""
	I1002 20:12:47.506000   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:47.506320   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:47.506400   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:48.004928   32280 type.go:168] "Request Body" body=""
	I1002 20:12:48.005004   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:48.005305   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:48.504865   32280 type.go:168] "Request Body" body=""
	I1002 20:12:48.504959   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:48.505270   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:49.004954   32280 type.go:168] "Request Body" body=""
	I1002 20:12:49.005020   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:49.005323   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:49.505045   32280 type.go:168] "Request Body" body=""
	I1002 20:12:49.505108   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:49.505418   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:50.004957   32280 type.go:168] "Request Body" body=""
	I1002 20:12:50.005023   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:50.005336   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:50.005399   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:50.108603   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:50.158622   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:50.158675   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:50.158705   32280 retry.go:31] will retry after 7.866512619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:50.505487   32280 type.go:168] "Request Body" body=""
	I1002 20:12:50.505555   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:50.505903   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:51.005559   32280 type.go:168] "Request Body" body=""
	I1002 20:12:51.005635   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:51.005990   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:51.505707   32280 type.go:168] "Request Body" body=""
	I1002 20:12:51.505791   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:51.506153   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:52.005777   32280 type.go:168] "Request Body" body=""
	I1002 20:12:52.005901   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:52.006225   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:52.006281   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:52.504874   32280 type.go:168] "Request Body" body=""
	I1002 20:12:52.504935   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:52.505268   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:53.005873   32280 type.go:168] "Request Body" body=""
	I1002 20:12:53.005951   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:53.006260   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:53.504881   32280 type.go:168] "Request Body" body=""
	I1002 20:12:53.505006   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:53.505318   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:54.004965   32280 type.go:168] "Request Body" body=""
	I1002 20:12:54.005040   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:54.005355   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:54.505336   32280 type.go:168] "Request Body" body=""
	I1002 20:12:54.505429   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:54.505803   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:54.505860   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:55.005500   32280 type.go:168] "Request Body" body=""
	I1002 20:12:55.005582   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:55.005971   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:55.505630   32280 type.go:168] "Request Body" body=""
	I1002 20:12:55.505727   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:55.506074   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:56.005749   32280 type.go:168] "Request Body" body=""
	I1002 20:12:56.005828   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:56.006175   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:56.505817   32280 type.go:168] "Request Body" body=""
	I1002 20:12:56.505912   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:56.506244   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:56.506305   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:56.782639   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:56.831722   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:56.833971   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:56.834005   32280 retry.go:31] will retry after 8.803585786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:57.005357   32280 type.go:168] "Request Body" body=""
	I1002 20:12:57.005440   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:57.005756   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:57.505340   32280 type.go:168] "Request Body" body=""
	I1002 20:12:57.505420   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:57.505751   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:58.005333   32280 type.go:168] "Request Body" body=""
	I1002 20:12:58.005402   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:58.005752   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:58.025966   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:58.074036   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:58.076335   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:58.076367   32280 retry.go:31] will retry after 21.837732561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:58.504884   32280 type.go:168] "Request Body" body=""
	I1002 20:12:58.504952   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:58.505269   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:59.005019   32280 type.go:168] "Request Body" body=""
	I1002 20:12:59.005088   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:59.005416   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:59.005476   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:59.505294   32280 type.go:168] "Request Body" body=""
	I1002 20:12:59.505371   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:59.505719   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:00.005587   32280 type.go:168] "Request Body" body=""
	I1002 20:13:00.005681   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:00.006070   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:00.505895   32280 type.go:168] "Request Body" body=""
	I1002 20:13:00.505970   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:00.506282   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:01.005032   32280 type.go:168] "Request Body" body=""
	I1002 20:13:01.005101   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:01.005454   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:01.005507   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:01.505230   32280 type.go:168] "Request Body" body=""
	I1002 20:13:01.505332   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:01.505713   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:02.005565   32280 type.go:168] "Request Body" body=""
	I1002 20:13:02.005638   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:02.005989   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:02.505747   32280 type.go:168] "Request Body" body=""
	I1002 20:13:02.505834   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:02.506161   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:03.004921   32280 type.go:168] "Request Body" body=""
	I1002 20:13:03.004999   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:03.005353   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:03.505030   32280 type.go:168] "Request Body" body=""
	I1002 20:13:03.505163   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:03.505496   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:03.505553   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:04.005013   32280 type.go:168] "Request Body" body=""
	I1002 20:13:04.005102   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:04.005412   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:04.505235   32280 type.go:168] "Request Body" body=""
	I1002 20:13:04.505310   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:04.505603   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:05.005373   32280 type.go:168] "Request Body" body=""
	I1002 20:13:05.005436   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:05.005779   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:05.505626   32280 type.go:168] "Request Body" body=""
	I1002 20:13:05.505713   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:05.506017   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:05.506071   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:05.638454   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:13:05.690182   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:05.690237   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:13:05.690256   32280 retry.go:31] will retry after 17.824989731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:13:06.005701   32280 type.go:168] "Request Body" body=""
	I1002 20:13:06.005799   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:06.006119   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:06.504842   32280 type.go:168] "Request Body" body=""
	I1002 20:13:06.504914   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:06.505251   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:07.005004   32280 type.go:168] "Request Body" body=""
	I1002 20:13:07.005108   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:07.005436   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:07.505210   32280 type.go:168] "Request Body" body=""
	I1002 20:13:07.505283   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:07.505609   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:08.005363   32280 type.go:168] "Request Body" body=""
	I1002 20:13:08.005446   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:08.005783   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:08.005845   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:08.505633   32280 type.go:168] "Request Body" body=""
	I1002 20:13:08.505725   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:08.506087   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:09.004810   32280 type.go:168] "Request Body" body=""
	I1002 20:13:09.004939   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:09.005246   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:09.505036   32280 type.go:168] "Request Body" body=""
	I1002 20:13:09.505109   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:09.505465   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:10.005227   32280 type.go:168] "Request Body" body=""
	I1002 20:13:10.005294   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:10.005624   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:10.505218   32280 type.go:168] "Request Body" body=""
	I1002 20:13:10.505284   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:10.505609   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:10.505692   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:11.005490   32280 type.go:168] "Request Body" body=""
	I1002 20:13:11.005558   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:11.005879   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:11.505739   32280 type.go:168] "Request Body" body=""
	I1002 20:13:11.505817   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:11.506182   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:12.004937   32280 type.go:168] "Request Body" body=""
	I1002 20:13:12.005026   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:12.005341   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:12.505102   32280 type.go:168] "Request Body" body=""
	I1002 20:13:12.505168   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:12.505509   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:13.005242   32280 type.go:168] "Request Body" body=""
	I1002 20:13:13.005316   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:13.005692   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:13.005741   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:13.505519   32280 type.go:168] "Request Body" body=""
	I1002 20:13:13.505584   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:13.505958   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:14.005767   32280 type.go:168] "Request Body" body=""
	I1002 20:13:14.005841   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:14.006164   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:13:14.505003   32280 type.go:168] "Request Body" body=""
	I1002 20:13:14.505069   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:14.505397   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:13:15.505864   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET poll of /api/v1/nodes/functional-753218 repeats every ~500ms, each attempt refused, through 20:13:19.5 ...]
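The repeated GETs above are minikube waiting for the node's Ready condition; each poll fails while the apiserver on 192.168.49.2:8441 is down. As a rough, hypothetical sketch only (this is not minikube's actual node_ready.go), the same poll can be written with client-go:

    // readypoll.go: hypothetical sketch of the Ready-condition poll in the
    // log above; not minikube's node_ready.go.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path taken from the log; adjust as needed.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-753218", metav1.GetOptions{})
            if err != nil {
                // While the apiserver is down the GET fails, so log and retry,
                // matching the node_ready.go:55 warnings above.
                fmt.Printf("error getting node (will retry): %v\n", err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // the ~500ms cadence seen above
        }
    }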
	I1002 20:13:19.914795   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:13:19.964946   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:19.964982   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:13:19.964998   32280 retry.go:31] will retry after 37.877741779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
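The storageclass apply above fails because the apiserver is unreachable, and minikube's retry helper (retry.go) schedules another attempt after a randomized delay. A minimal, hypothetical sketch of that retry-with-jittered-backoff pattern (not minikube's actual retry.go):

    // retrysketch.go: hypothetical sketch of the "will retry after ..."
    // pattern visible in the log; not minikube's retry.go.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn up to attempts times, sleeping a jittered backoff between
    // failures; it returns the last error if every attempt fails.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // Randomized delay, loosely matching the irregular waits
            // (23.13s, 37.88s, ...) reported by retry.go:31 above.
            d := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        _ = retry(3, 2*time.Second, func() error {
            return fmt.Errorf("connect: connection refused")
        })
    }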
	[... Ready polls of /api/v1/nodes/functional-753218 continue every ~500ms (connection refused) from 20:13:20.0 through 20:13:23.5 ...]
	I1002 20:13:23.515608   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:13:23.566822   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:23.566879   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:13:23.566903   32280 retry.go:31] will retry after 23.13190401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
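Note the failure mode: kubectl apply exits during client-side validation because it cannot download /openapi/v2 from the (down) apiserver; --validate=false would only skip that check, the apply itself would still be refused. A hypothetical sketch of the apply call minikube runs over SSH, shelling out to the pinned kubectl exactly as logged:

    // applysketch.go: hypothetical sketch of the addon apply from the
    // ssh_runner lines above; not minikube's addons code.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon invokes the pinned kubectl binary with the same arguments
    // the log shows (sudo accepts the VAR=value assignment before the command).
    func applyAddon(manifest string) error {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.34.1/kubectl",
            "apply", "--force", "-f", manifest)
        out, err := cmd.CombinedOutput()
        if err != nil {
            // With the apiserver down, kubectl exits 1 while fetching
            // /openapi/v2 for client-side validation, as in the log.
            return fmt.Errorf("apply failed: %v\noutput:\n%s", err, out)
        }
        return nil
    }

    func main() {
        if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
            fmt.Println(err)
        }
    }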
	[... Ready polls continue every ~500ms (connection refused) from 20:13:24.0 through 20:13:46.5, with node_ready.go:55 retry warnings roughly every 2s ...]
	I1002 20:13:46.699644   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:13:46.747344   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:46.749844   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:46.749973   32280 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	[... Ready polls continue every ~500ms (connection refused) from 20:13:47.0 through 20:13:57.5 ...]
	I1002 20:13:57.843521   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:13:57.893953   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:57.894023   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:57.894118   32280 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:13:57.896474   32280 out.go:179] * Enabled addons: 
	I1002 20:13:57.898063   32280 addons.go:514] duration metric: took 1m37.510002204s for enable addons: enabled=[]
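After roughly 1m37s of failed retries, minikube gives up on the addons and reports an empty enabled list. The "duration metric" line is plain elapsed-time logging; a hypothetical sketch of the pattern:

    // durationsketch.go: hypothetical sketch of the duration-metric log line;
    // record a start time and report time.Since on completion.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        enabled := []string{}             // nothing succeeded in the run above
        time.Sleep(10 * time.Millisecond) // stand-in for the enable-addons work
        fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
            time.Since(start), enabled)
    }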
	[... Ready polls continue every ~500ms (connection refused) from 20:13:58.0 through 20:14:09.5 ...]
	I1002 20:14:10.004831   32280 type.go:168] "Request Body" body=""
	I1002 20:14:10.004913   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:10.005285   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:10.504951   32280 type.go:168] "Request Body" body=""
	I1002 20:14:10.505047   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:10.505396   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:11.005158   32280 type.go:168] "Request Body" body=""
	I1002 20:14:11.005275   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:11.005733   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:11.005797   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:11.505549   32280 type.go:168] "Request Body" body=""
	I1002 20:14:11.505697   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:11.506073   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:12.005903   32280 type.go:168] "Request Body" body=""
	I1002 20:14:12.005966   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:12.006268   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:12.505003   32280 type.go:168] "Request Body" body=""
	I1002 20:14:12.505086   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:12.505427   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:13.004849   32280 type.go:168] "Request Body" body=""
	I1002 20:14:13.004968   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:13.005312   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:13.505032   32280 type.go:168] "Request Body" body=""
	I1002 20:14:13.505109   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:13.505438   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:13.505493   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:14.005138   32280 type.go:168] "Request Body" body=""
	I1002 20:14:14.005202   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:14.005533   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:14.505306   32280 type.go:168] "Request Body" body=""
	I1002 20:14:14.505402   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:14.505762   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:15.005543   32280 type.go:168] "Request Body" body=""
	I1002 20:14:15.005604   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:15.005962   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:15.505741   32280 type.go:168] "Request Body" body=""
	I1002 20:14:15.505841   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:15.506168   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:15.506245   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:16.005122   32280 type.go:168] "Request Body" body=""
	I1002 20:14:16.005232   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:16.005696   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:16.504984   32280 type.go:168] "Request Body" body=""
	I1002 20:14:16.505049   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:16.505370   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:17.004896   32280 type.go:168] "Request Body" body=""
	I1002 20:14:17.004982   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:17.005311   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:17.504836   32280 type.go:168] "Request Body" body=""
	I1002 20:14:17.504907   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:17.505220   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:18.005868   32280 type.go:168] "Request Body" body=""
	I1002 20:14:18.005951   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:18.006358   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:18.006423   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:18.504940   32280 type.go:168] "Request Body" body=""
	I1002 20:14:18.505026   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:18.505333   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:19.004866   32280 type.go:168] "Request Body" body=""
	I1002 20:14:19.004945   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:19.005275   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:19.505078   32280 type.go:168] "Request Body" body=""
	I1002 20:14:19.505155   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:19.505483   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:20.004994   32280 type.go:168] "Request Body" body=""
	I1002 20:14:20.005076   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:20.005381   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:20.505242   32280 type.go:168] "Request Body" body=""
	I1002 20:14:20.505307   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:20.505631   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:20.505718   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:21.005226   32280 type.go:168] "Request Body" body=""
	I1002 20:14:21.005289   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:21.005590   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:21.505335   32280 type.go:168] "Request Body" body=""
	I1002 20:14:21.505404   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:21.505749   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:22.005375   32280 type.go:168] "Request Body" body=""
	I1002 20:14:22.005439   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:22.005744   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:22.505304   32280 type.go:168] "Request Body" body=""
	I1002 20:14:22.505371   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:22.505716   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:22.505771   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:23.005272   32280 type.go:168] "Request Body" body=""
	I1002 20:14:23.005334   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:23.005644   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:23.505227   32280 type.go:168] "Request Body" body=""
	I1002 20:14:23.505324   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:23.505721   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:24.005280   32280 type.go:168] "Request Body" body=""
	I1002 20:14:24.005348   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:24.005690   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:24.505614   32280 type.go:168] "Request Body" body=""
	I1002 20:14:24.505707   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:24.506064   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:24.506123   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:25.005722   32280 type.go:168] "Request Body" body=""
	I1002 20:14:25.005789   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:25.006118   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:25.505754   32280 type.go:168] "Request Body" body=""
	I1002 20:14:25.505821   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:25.506147   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:26.005768   32280 type.go:168] "Request Body" body=""
	I1002 20:14:26.005838   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:26.006153   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:26.505742   32280 type.go:168] "Request Body" body=""
	I1002 20:14:26.505810   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:26.506121   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:26.506173   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:27.005763   32280 type.go:168] "Request Body" body=""
	I1002 20:14:27.005839   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:27.006182   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:27.505814   32280 type.go:168] "Request Body" body=""
	I1002 20:14:27.505878   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:27.506202   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:28.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:14:28.005938   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:28.006243   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:28.504821   32280 type.go:168] "Request Body" body=""
	I1002 20:14:28.504889   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:28.505244   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:29.005929   32280 type.go:168] "Request Body" body=""
	I1002 20:14:29.005998   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:29.006317   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:29.006373   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:29.505885   32280 type.go:168] "Request Body" body=""
	I1002 20:14:29.505955   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:29.506284   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:30.004871   32280 type.go:168] "Request Body" body=""
	I1002 20:14:30.004946   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:30.005283   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:30.505131   32280 type.go:168] "Request Body" body=""
	I1002 20:14:30.505212   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:30.505536   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:31.005137   32280 type.go:168] "Request Body" body=""
	I1002 20:14:31.005230   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:31.005549   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:31.505115   32280 type.go:168] "Request Body" body=""
	I1002 20:14:31.505177   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:31.505493   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:31.505544   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:32.005077   32280 type.go:168] "Request Body" body=""
	I1002 20:14:32.005142   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:32.005447   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:32.505767   32280 type.go:168] "Request Body" body=""
	I1002 20:14:32.505835   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:32.506138   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:33.005842   32280 type.go:168] "Request Body" body=""
	I1002 20:14:33.005927   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:33.006231   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:33.505868   32280 type.go:168] "Request Body" body=""
	I1002 20:14:33.505947   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:33.506252   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:33.506315   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:34.004818   32280 type.go:168] "Request Body" body=""
	I1002 20:14:34.004919   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:34.005210   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:34.504930   32280 type.go:168] "Request Body" body=""
	I1002 20:14:34.505008   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:34.505322   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:35.004949   32280 type.go:168] "Request Body" body=""
	I1002 20:14:35.005011   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:35.005319   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:35.505837   32280 type.go:168] "Request Body" body=""
	I1002 20:14:35.505935   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:35.506248   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:36.005867   32280 type.go:168] "Request Body" body=""
	I1002 20:14:36.005936   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:36.006232   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:36.006283   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:36.505902   32280 type.go:168] "Request Body" body=""
	I1002 20:14:36.506056   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:36.506384   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:37.004951   32280 type.go:168] "Request Body" body=""
	I1002 20:14:37.005021   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:37.005328   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:37.504906   32280 type.go:168] "Request Body" body=""
	I1002 20:14:37.504995   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:37.505334   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:38.004876   32280 type.go:168] "Request Body" body=""
	I1002 20:14:38.004944   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:38.005255   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:38.504831   32280 type.go:168] "Request Body" body=""
	I1002 20:14:38.504917   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:38.505277   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:38.505331   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:39.004819   32280 type.go:168] "Request Body" body=""
	I1002 20:14:39.004911   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:39.005204   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:39.505017   32280 type.go:168] "Request Body" body=""
	I1002 20:14:39.505087   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:39.505399   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:40.005080   32280 type.go:168] "Request Body" body=""
	I1002 20:14:40.005144   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:40.005445   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:40.505248   32280 type.go:168] "Request Body" body=""
	I1002 20:14:40.505310   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:40.505614   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:40.505711   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:41.005196   32280 type.go:168] "Request Body" body=""
	I1002 20:14:41.005309   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:41.005631   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:41.505223   32280 type.go:168] "Request Body" body=""
	I1002 20:14:41.505304   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:41.505623   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:42.005154   32280 type.go:168] "Request Body" body=""
	I1002 20:14:42.005238   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:42.005535   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:42.505095   32280 type.go:168] "Request Body" body=""
	I1002 20:14:42.505175   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:42.505514   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:43.005064   32280 type.go:168] "Request Body" body=""
	I1002 20:14:43.005128   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:43.005441   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:43.005493   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:43.504991   32280 type.go:168] "Request Body" body=""
	I1002 20:14:43.505079   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:43.505393   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:44.004948   32280 type.go:168] "Request Body" body=""
	I1002 20:14:44.005018   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:44.005312   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:44.505045   32280 type.go:168] "Request Body" body=""
	I1002 20:14:44.505109   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:44.505414   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:45.004946   32280 type.go:168] "Request Body" body=""
	I1002 20:14:45.005005   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:45.005307   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:45.504859   32280 type.go:168] "Request Body" body=""
	I1002 20:14:45.504931   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:45.505245   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:45.505309   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:46.005851   32280 type.go:168] "Request Body" body=""
	I1002 20:14:46.005934   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:46.006245   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:46.505842   32280 type.go:168] "Request Body" body=""
	I1002 20:14:46.505929   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:46.506226   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:47.005902   32280 type.go:168] "Request Body" body=""
	I1002 20:14:47.005962   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:47.006270   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:47.504848   32280 type.go:168] "Request Body" body=""
	I1002 20:14:47.504912   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:47.505208   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:48.005819   32280 type.go:168] "Request Body" body=""
	I1002 20:14:48.005910   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:48.006200   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:48.006262   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:48.504839   32280 type.go:168] "Request Body" body=""
	I1002 20:14:48.504925   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:48.505259   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:49.004816   32280 type.go:168] "Request Body" body=""
	I1002 20:14:49.004911   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:49.005214   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:49.504930   32280 type.go:168] "Request Body" body=""
	I1002 20:14:49.505022   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:49.505322   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:50.004888   32280 type.go:168] "Request Body" body=""
	I1002 20:14:50.004963   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:50.005258   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:50.505167   32280 type.go:168] "Request Body" body=""
	I1002 20:14:50.505271   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:50.505603   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:50.505700   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:51.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:14:51.005941   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:51.006228   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:51.505859   32280 type.go:168] "Request Body" body=""
	I1002 20:14:51.505973   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:51.506301   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:52.004831   32280 type.go:168] "Request Body" body=""
	I1002 20:14:52.004912   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:52.005216   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:52.504814   32280 type.go:168] "Request Body" body=""
	I1002 20:14:52.504898   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:52.505216   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:53.005826   32280 type.go:168] "Request Body" body=""
	I1002 20:14:53.005886   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:53.006180   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:53.006232   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:53.505812   32280 type.go:168] "Request Body" body=""
	I1002 20:14:53.505888   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:53.506201   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:54.005808   32280 type.go:168] "Request Body" body=""
	I1002 20:14:54.005871   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:54.006166   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:54.504871   32280 type.go:168] "Request Body" body=""
	I1002 20:14:54.504938   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:54.505247   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:55.004807   32280 type.go:168] "Request Body" body=""
	I1002 20:14:55.004892   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:55.005219   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:55.505889   32280 type.go:168] "Request Body" body=""
	I1002 20:14:55.505973   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:55.506277   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:55.506339   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:56.004856   32280 type.go:168] "Request Body" body=""
	I1002 20:14:56.004932   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:56.005222   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:56.504887   32280 type.go:168] "Request Body" body=""
	I1002 20:14:56.504963   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:56.505264   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:57.004822   32280 type.go:168] "Request Body" body=""
	I1002 20:14:57.004940   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:57.005238   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:57.505875   32280 type.go:168] "Request Body" body=""
	I1002 20:14:57.505940   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:57.506273   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:58.005858   32280 type.go:168] "Request Body" body=""
	I1002 20:14:58.005932   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:58.006233   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:58.006297   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:58.504813   32280 type.go:168] "Request Body" body=""
	I1002 20:14:58.504910   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:58.505221   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:59.005853   32280 type.go:168] "Request Body" body=""
	I1002 20:14:59.005916   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:59.006215   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET poll of https://192.168.49.2:8441/api/v1/nodes/functional-753218 repeats every ~500 ms from 20:14:59.505 through 20:16:01.005, each request returning an empty response immediately (the API server is refusing connections); node_ready.go:55 emits the same "will retry" warning roughly every 2-2.5 s, the last of them being: ...]
	W1002 20:16:01.005546   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:01.505154   32280 type.go:168] "Request Body" body=""
	I1002 20:16:01.505217   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:01.505529   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:02.005146   32280 type.go:168] "Request Body" body=""
	I1002 20:16:02.005224   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:02.005550   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:02.505113   32280 type.go:168] "Request Body" body=""
	I1002 20:16:02.505181   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:02.505501   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:03.005066   32280 type.go:168] "Request Body" body=""
	I1002 20:16:03.005132   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:03.005422   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:03.505093   32280 type.go:168] "Request Body" body=""
	I1002 20:16:03.505162   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:03.505508   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:03.505564   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:04.005055   32280 type.go:168] "Request Body" body=""
	I1002 20:16:04.005119   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:04.005406   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:04.505180   32280 type.go:168] "Request Body" body=""
	I1002 20:16:04.505248   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:04.505566   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:05.005130   32280 type.go:168] "Request Body" body=""
	I1002 20:16:05.005192   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:05.005494   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:05.505063   32280 type.go:168] "Request Body" body=""
	I1002 20:16:05.505131   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:05.505442   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:06.005022   32280 type.go:168] "Request Body" body=""
	I1002 20:16:06.005086   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:06.005392   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:06.005444   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:06.505030   32280 type.go:168] "Request Body" body=""
	I1002 20:16:06.505095   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:06.505395   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:07.004971   32280 type.go:168] "Request Body" body=""
	I1002 20:16:07.005038   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:07.005337   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:07.504911   32280 type.go:168] "Request Body" body=""
	I1002 20:16:07.505004   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:07.505316   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:08.004917   32280 type.go:168] "Request Body" body=""
	I1002 20:16:08.004990   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:08.005311   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:08.504887   32280 type.go:168] "Request Body" body=""
	I1002 20:16:08.504958   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:08.505256   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:08.505311   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:09.005884   32280 type.go:168] "Request Body" body=""
	I1002 20:16:09.005950   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:09.006258   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:09.505071   32280 type.go:168] "Request Body" body=""
	I1002 20:16:09.505141   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:09.505485   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:10.005085   32280 type.go:168] "Request Body" body=""
	I1002 20:16:10.005150   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:10.005494   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:10.505286   32280 type.go:168] "Request Body" body=""
	I1002 20:16:10.505357   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:10.505685   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:10.505751   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:11.005245   32280 type.go:168] "Request Body" body=""
	I1002 20:16:11.005311   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:11.005606   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:11.505183   32280 type.go:168] "Request Body" body=""
	I1002 20:16:11.505245   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:11.505547   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:12.005105   32280 type.go:168] "Request Body" body=""
	I1002 20:16:12.005169   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:12.005459   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:12.505029   32280 type.go:168] "Request Body" body=""
	I1002 20:16:12.505094   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:12.505392   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:13.005040   32280 type.go:168] "Request Body" body=""
	I1002 20:16:13.005104   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:13.005422   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:13.005474   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:13.504990   32280 type.go:168] "Request Body" body=""
	I1002 20:16:13.505055   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:13.505357   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:14.004946   32280 type.go:168] "Request Body" body=""
	I1002 20:16:14.005015   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:14.005324   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:14.505076   32280 type.go:168] "Request Body" body=""
	I1002 20:16:14.505142   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:14.505433   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:15.005063   32280 type.go:168] "Request Body" body=""
	I1002 20:16:15.005134   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:15.005446   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:15.005498   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:15.504952   32280 type.go:168] "Request Body" body=""
	I1002 20:16:15.505022   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:15.505328   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:16.004912   32280 type.go:168] "Request Body" body=""
	I1002 20:16:16.004990   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:16.005339   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:16.505464   32280 type.go:168] "Request Body" body=""
	I1002 20:16:16.505571   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:16.505963   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:17.005818   32280 type.go:168] "Request Body" body=""
	I1002 20:16:17.005930   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:17.006240   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:17.006295   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:17.504827   32280 type.go:168] "Request Body" body=""
	I1002 20:16:17.504891   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:17.505213   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:18.005877   32280 type.go:168] "Request Body" body=""
	I1002 20:16:18.005946   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:18.006281   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:18.505257   32280 type.go:168] "Request Body" body=""
	I1002 20:16:18.505334   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:18.505711   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:19.005252   32280 type.go:168] "Request Body" body=""
	I1002 20:16:19.005317   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:19.005634   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:19.505459   32280 type.go:168] "Request Body" body=""
	I1002 20:16:19.505521   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:19.505917   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:19.505979   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:20.005531   32280 type.go:168] "Request Body" body=""
	I1002 20:16:20.005594   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:20.005938   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:20.505740   32280 type.go:168] "Request Body" body=""
	I1002 20:16:20.505803   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:20.506120   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:21.005728   32280 type.go:168] "Request Body" body=""
	I1002 20:16:21.005789   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:21.006134   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:21.505734   32280 type.go:168] "Request Body" body=""
	I1002 20:16:21.505799   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:21.506152   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:21.506214   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:22.005776   32280 type.go:168] "Request Body" body=""
	I1002 20:16:22.005835   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:22.006129   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:22.505854   32280 type.go:168] "Request Body" body=""
	I1002 20:16:22.505921   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:22.506271   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:23.004819   32280 type.go:168] "Request Body" body=""
	I1002 20:16:23.004883   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:23.005226   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:23.504886   32280 type.go:168] "Request Body" body=""
	I1002 20:16:23.504953   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:23.505310   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:24.004892   32280 type.go:168] "Request Body" body=""
	I1002 20:16:24.004960   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:24.005258   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:24.005327   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:24.505080   32280 type.go:168] "Request Body" body=""
	I1002 20:16:24.505161   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:24.505504   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:25.005053   32280 type.go:168] "Request Body" body=""
	I1002 20:16:25.005119   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:25.005426   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:25.505026   32280 type.go:168] "Request Body" body=""
	I1002 20:16:25.505087   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:25.505410   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:26.004953   32280 type.go:168] "Request Body" body=""
	I1002 20:16:26.005021   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:26.005328   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:26.005378   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:26.504910   32280 type.go:168] "Request Body" body=""
	I1002 20:16:26.504977   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:26.505326   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:27.005842   32280 type.go:168] "Request Body" body=""
	I1002 20:16:27.005906   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:27.006192   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:27.505877   32280 type.go:168] "Request Body" body=""
	I1002 20:16:27.505952   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:27.506276   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:28.004832   32280 type.go:168] "Request Body" body=""
	I1002 20:16:28.004908   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:28.005212   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:28.505846   32280 type.go:168] "Request Body" body=""
	I1002 20:16:28.505928   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:28.506279   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:28.506330   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:29.004829   32280 type.go:168] "Request Body" body=""
	I1002 20:16:29.004904   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:29.005217   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:29.505056   32280 type.go:168] "Request Body" body=""
	I1002 20:16:29.505125   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:29.505465   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:30.005011   32280 type.go:168] "Request Body" body=""
	I1002 20:16:30.005075   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:30.005370   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:30.505105   32280 type.go:168] "Request Body" body=""
	I1002 20:16:30.505170   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:30.505455   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:31.005091   32280 type.go:168] "Request Body" body=""
	I1002 20:16:31.005160   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:31.005463   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:31.005521   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:31.504995   32280 type.go:168] "Request Body" body=""
	I1002 20:16:31.505061   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:31.505362   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:32.005845   32280 type.go:168] "Request Body" body=""
	I1002 20:16:32.005909   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:32.006188   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:32.505814   32280 type.go:168] "Request Body" body=""
	I1002 20:16:32.505878   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:32.506185   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:33.005817   32280 type.go:168] "Request Body" body=""
	I1002 20:16:33.005884   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:33.006190   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:33.006257   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:33.505816   32280 type.go:168] "Request Body" body=""
	I1002 20:16:33.505892   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:33.506205   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:34.005835   32280 type.go:168] "Request Body" body=""
	I1002 20:16:34.005898   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:34.006219   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:34.504923   32280 type.go:168] "Request Body" body=""
	I1002 20:16:34.504996   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:34.505358   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:35.004928   32280 type.go:168] "Request Body" body=""
	I1002 20:16:35.005004   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:35.005345   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:35.504930   32280 type.go:168] "Request Body" body=""
	I1002 20:16:35.504994   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:35.505319   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:35.505372   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:36.004925   32280 type.go:168] "Request Body" body=""
	I1002 20:16:36.004992   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:36.005316   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:36.504877   32280 type.go:168] "Request Body" body=""
	I1002 20:16:36.504954   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:36.505294   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:37.005839   32280 type.go:168] "Request Body" body=""
	I1002 20:16:37.005910   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:37.006248   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:37.505873   32280 type.go:168] "Request Body" body=""
	I1002 20:16:37.505941   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:37.506266   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:37.506318   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:38.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:16:38.005944   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:38.006246   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:38.504902   32280 type.go:168] "Request Body" body=""
	I1002 20:16:38.504969   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:38.505303   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:39.004874   32280 type.go:168] "Request Body" body=""
	I1002 20:16:39.004947   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:39.005260   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:39.505046   32280 type.go:168] "Request Body" body=""
	I1002 20:16:39.505118   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:39.505463   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:40.004989   32280 type.go:168] "Request Body" body=""
	I1002 20:16:40.005054   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:40.005341   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:40.005393   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:40.505166   32280 type.go:168] "Request Body" body=""
	I1002 20:16:40.505235   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:40.505560   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:41.005152   32280 type.go:168] "Request Body" body=""
	I1002 20:16:41.005218   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:41.005554   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:41.505090   32280 type.go:168] "Request Body" body=""
	I1002 20:16:41.505158   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:41.505444   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:42.005073   32280 type.go:168] "Request Body" body=""
	I1002 20:16:42.005138   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:42.005449   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:42.005504   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:42.505067   32280 type.go:168] "Request Body" body=""
	I1002 20:16:42.505134   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:42.505424   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:43.004978   32280 type.go:168] "Request Body" body=""
	I1002 20:16:43.005045   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:43.005360   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:43.504918   32280 type.go:168] "Request Body" body=""
	I1002 20:16:43.504994   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:43.505315   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:44.004897   32280 type.go:168] "Request Body" body=""
	I1002 20:16:44.004973   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:44.005278   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:44.505052   32280 type.go:168] "Request Body" body=""
	I1002 20:16:44.505115   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:44.505420   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:44.505478   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:45.004947   32280 type.go:168] "Request Body" body=""
	I1002 20:16:45.005019   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:45.005322   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:45.504923   32280 type.go:168] "Request Body" body=""
	I1002 20:16:45.504993   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:45.505338   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:46.004905   32280 type.go:168] "Request Body" body=""
	I1002 20:16:46.004979   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:46.005286   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:46.504835   32280 type.go:168] "Request Body" body=""
	I1002 20:16:46.504925   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:46.505219   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:47.005826   32280 type.go:168] "Request Body" body=""
	I1002 20:16:47.005892   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:47.006200   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:47.006269   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:47.505816   32280 type.go:168] "Request Body" body=""
	I1002 20:16:47.505884   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:47.506197   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:48.005806   32280 type.go:168] "Request Body" body=""
	I1002 20:16:48.005870   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:48.006179   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:48.505827   32280 type.go:168] "Request Body" body=""
	I1002 20:16:48.505888   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:48.506194   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:49.005815   32280 type.go:168] "Request Body" body=""
	I1002 20:16:49.005894   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:49.006203   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:49.504963   32280 type.go:168] "Request Body" body=""
	I1002 20:16:49.505034   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:49.505380   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:49.505431   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:50.004940   32280 type.go:168] "Request Body" body=""
	I1002 20:16:50.005017   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:50.005304   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:50.505134   32280 type.go:168] "Request Body" body=""
	I1002 20:16:50.505201   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:50.505531   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:51.005099   32280 type.go:168] "Request Body" body=""
	I1002 20:16:51.005174   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:51.005505   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:51.505049   32280 type.go:168] "Request Body" body=""
	I1002 20:16:51.505116   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:51.505426   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:51.505479   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-753218 polled every ~0.5 s with identical headers and the same empty response (status="" headers="" milliseconds=0); node_ready.go:55 repeated the "dial tcp 192.168.49.2:8441: connect: connection refused" warning roughly every 2.5 s, from 20:16:54.005 through 20:17:50.505 ...]
	I1002 20:17:52.005000   32280 type.go:168] "Request Body" body=""
	I1002 20:17:52.005081   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:52.005428   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:52.505012   32280 type.go:168] "Request Body" body=""
	I1002 20:17:52.505100   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:52.505419   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:53.005015   32280 type.go:168] "Request Body" body=""
	I1002 20:17:53.005100   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:53.005438   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:53.005495   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:53.504988   32280 type.go:168] "Request Body" body=""
	I1002 20:17:53.505059   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:53.505385   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:54.004971   32280 type.go:168] "Request Body" body=""
	I1002 20:17:54.005048   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:54.005388   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:54.505199   32280 type.go:168] "Request Body" body=""
	I1002 20:17:54.505286   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:54.505624   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:55.005199   32280 type.go:168] "Request Body" body=""
	I1002 20:17:55.005287   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:55.005639   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:55.005734   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:55.505238   32280 type.go:168] "Request Body" body=""
	I1002 20:17:55.505303   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:55.505621   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:56.005174   32280 type.go:168] "Request Body" body=""
	I1002 20:17:56.005258   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:56.005612   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:56.505166   32280 type.go:168] "Request Body" body=""
	I1002 20:17:56.505231   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:56.505523   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:57.005076   32280 type.go:168] "Request Body" body=""
	I1002 20:17:57.005156   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:57.005631   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:57.505096   32280 type.go:168] "Request Body" body=""
	I1002 20:17:57.505167   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:57.505488   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:57.505554   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:58.005160   32280 type.go:168] "Request Body" body=""
	I1002 20:17:58.005227   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:58.005552   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:58.505084   32280 type.go:168] "Request Body" body=""
	I1002 20:17:58.505166   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:58.505512   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:59.005073   32280 type.go:168] "Request Body" body=""
	I1002 20:17:59.005138   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:59.005430   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:59.505390   32280 type.go:168] "Request Body" body=""
	I1002 20:17:59.505459   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:59.505823   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:59.505890   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:00.005468   32280 type.go:168] "Request Body" body=""
	I1002 20:18:00.005540   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:00.005877   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:00.505768   32280 type.go:168] "Request Body" body=""
	I1002 20:18:00.505843   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:00.506177   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:01.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:18:01.005945   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:01.006334   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:01.504923   32280 type.go:168] "Request Body" body=""
	I1002 20:18:01.504996   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:01.505321   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:02.004953   32280 type.go:168] "Request Body" body=""
	I1002 20:18:02.005017   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:02.005334   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:02.005385   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:02.504884   32280 type.go:168] "Request Body" body=""
	I1002 20:18:02.504963   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:02.505259   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:03.004934   32280 type.go:168] "Request Body" body=""
	I1002 20:18:03.005005   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:03.005356   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:03.504932   32280 type.go:168] "Request Body" body=""
	I1002 20:18:03.505015   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:03.505307   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:04.004878   32280 type.go:168] "Request Body" body=""
	I1002 20:18:04.004960   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:04.005291   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:04.505045   32280 type.go:168] "Request Body" body=""
	I1002 20:18:04.505131   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:04.505465   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:04.505520   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:05.005008   32280 type.go:168] "Request Body" body=""
	I1002 20:18:05.005088   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:05.005422   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:05.504977   32280 type.go:168] "Request Body" body=""
	I1002 20:18:05.505046   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:05.505355   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:06.004890   32280 type.go:168] "Request Body" body=""
	I1002 20:18:06.004955   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:06.005271   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:06.505878   32280 type.go:168] "Request Body" body=""
	I1002 20:18:06.505942   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:06.506244   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:06.506297   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:07.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:18:07.005943   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:07.006253   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:07.504887   32280 type.go:168] "Request Body" body=""
	I1002 20:18:07.504964   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:07.505251   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:08.004916   32280 type.go:168] "Request Body" body=""
	I1002 20:18:08.004981   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:08.005306   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:08.504856   32280 type.go:168] "Request Body" body=""
	I1002 20:18:08.504941   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:08.505239   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:09.005880   32280 type.go:168] "Request Body" body=""
	I1002 20:18:09.005952   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:09.006285   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:09.006339   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:09.505080   32280 type.go:168] "Request Body" body=""
	I1002 20:18:09.505146   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:09.505447   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:10.005082   32280 type.go:168] "Request Body" body=""
	I1002 20:18:10.005147   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:10.005473   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:10.505242   32280 type.go:168] "Request Body" body=""
	I1002 20:18:10.505307   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:10.505606   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:11.005169   32280 type.go:168] "Request Body" body=""
	I1002 20:18:11.005243   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:11.005570   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:11.505121   32280 type.go:168] "Request Body" body=""
	I1002 20:18:11.505186   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:11.505487   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:11.505538   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:12.005071   32280 type.go:168] "Request Body" body=""
	I1002 20:18:12.005141   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:12.005461   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:12.505817   32280 type.go:168] "Request Body" body=""
	I1002 20:18:12.505883   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:12.506177   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:13.005815   32280 type.go:168] "Request Body" body=""
	I1002 20:18:13.005887   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:13.006211   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:13.505873   32280 type.go:168] "Request Body" body=""
	I1002 20:18:13.505942   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:13.506236   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:13.506287   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:14.004813   32280 type.go:168] "Request Body" body=""
	I1002 20:18:14.004883   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:14.005208   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:14.505838   32280 type.go:168] "Request Body" body=""
	I1002 20:18:14.505912   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:14.506225   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:15.005871   32280 type.go:168] "Request Body" body=""
	I1002 20:18:15.005949   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:15.006278   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:15.504830   32280 type.go:168] "Request Body" body=""
	I1002 20:18:15.504900   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:15.505190   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:16.004845   32280 type.go:168] "Request Body" body=""
	I1002 20:18:16.004935   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:16.005267   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:16.005321   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:16.504844   32280 type.go:168] "Request Body" body=""
	I1002 20:18:16.504915   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:16.505208   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:17.004848   32280 type.go:168] "Request Body" body=""
	I1002 20:18:17.005199   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:17.005523   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:17.505033   32280 type.go:168] "Request Body" body=""
	I1002 20:18:17.505107   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:17.505434   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:18.004982   32280 type.go:168] "Request Body" body=""
	I1002 20:18:18.005069   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:18.005443   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:18.005498   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:18.505161   32280 type.go:168] "Request Body" body=""
	I1002 20:18:18.505228   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:18.505530   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:19.005238   32280 type.go:168] "Request Body" body=""
	I1002 20:18:19.005302   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:19.005626   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:19.505401   32280 type.go:168] "Request Body" body=""
	I1002 20:18:19.505466   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:19.505798   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:20.005591   32280 type.go:168] "Request Body" body=""
	I1002 20:18:20.005673   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:20.006000   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:20.006051   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:20.505823   32280 type.go:168] "Request Body" body=""
	I1002 20:18:20.505886   32280 node_ready.go:38] duration metric: took 6m0.001160736s for node "functional-753218" to be "Ready" ...
	I1002 20:18:20.508034   32280 out.go:203] 
	W1002 20:18:20.509328   32280 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:18:20.509341   32280 out.go:285] * 
	W1002 20:18:20.511008   32280 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:18:20.512144   32280 out.go:203] 
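	The wait loop captured above issues GET /api/v1/nodes/functional-753218 every ~500ms, logs a warning on each "connection refused", and abandons the wait when the 6m0s deadline expires, which is exactly the GUEST_START failure reported. A minimal sketch of that bounded-retry pattern in plain Go follows (illustrative only: the endpoint, interval, and deadline are taken from the log, while the function name and client wiring are assumptions, not minikube's actual code):

	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitNodeReady polls url until it answers 200 or the 6-minute deadline
	// passes, mirroring the 500ms cadence visible in the log above.
	func waitNodeReady(ctx context.Context, url string) error {
		ctx, cancel := context.WithTimeout(ctx, 6*time.Minute)
		defer cancel()

		// No cluster CA bundle in this sketch, hence InsecureSkipVerify.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}

		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()

		for {
			select {
			case <-ctx.Done():
				// This is the "WaitNodeCondition: context deadline exceeded" path.
				return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
			case <-ticker.C:
				resp, err := client.Get(url)
				if err != nil {
					continue // e.g. "connect: connection refused" -- keep retrying
				}
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					// The real wait also inspects the Ready condition in the body.
					return nil
				}
			}
		}
	}

	func main() {
		err := waitNodeReady(context.Background(),
			"https://192.168.49.2:8441/api/v1/nodes/functional-753218")
		fmt.Println(err)
	}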
	
	
	==> CRI-O <==
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.349389136Z" level=info msg="createCtr: removing container d8a2e4886e59a5763e357c59eb0ae7ac013d8ca2bfe6e431c5c1f6bc3ee79896" id=cb98d186-791f-4fe9-8927-8ea0b105f661 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.349425042Z" level=info msg="createCtr: deleting container d8a2e4886e59a5763e357c59eb0ae7ac013d8ca2bfe6e431c5c1f6bc3ee79896 from storage" id=cb98d186-791f-4fe9-8927-8ea0b105f661 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.352229148Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-753218_kube-system_91f2b96cb3e3f380ded17f30e8d873bd_0" id=18325a5c-d189-43e2-a8f5-039b6780aeb2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.352676709Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-753218_kube-system_8f4d4ea1035e2535a9c472062bfdd7f7_0" id=cb98d186-791f-4fe9-8927-8ea0b105f661 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.579454847Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=a3e2887a-6b09-4204-8e87-28529019cb15 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.57958345Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=a3e2887a-6b09-4204-8e87-28529019cb15 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.579624227Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=a3e2887a-6b09-4204-8e87-28529019cb15 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.602981263Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=93dfff73-bd26-4d79-8160-b58f90868992 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.603116325Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=93dfff73-bd26-4d79-8160-b58f90868992 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.603199314Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=93dfff73-bd26-4d79-8160-b58f90868992 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.627047166Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=603ae9c4-b014-4e6b-9625-95255fb541a2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.627191214Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=603ae9c4-b014-4e6b-9625-95255fb541a2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.627226617Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=603ae9c4-b014-4e6b-9625-95255fb541a2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.059447852Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=868c30ac-55cd-4028-b41d-22cc3439b9eb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.313718275Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=09cf8f43-c411-4182-bf84-97ffb0d81e59 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.314581091Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=849bac91-7cc7-4cf5-bf85-a2b1e1b06303 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.315468648Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-753218/kube-scheduler" id=6d300eef-d6ea-42e3-b80b-3cbd04dc0c1f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.31575443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.319060123Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.319462218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.342136538Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=6d300eef-d6ea-42e3-b80b-3cbd04dc0c1f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.343860435Z" level=info msg="createCtr: deleting container ID 262457ea237ebad471b5ae976b91bbbdd55dcd0de930648457c28448315cf7af from idIndex" id=6d300eef-d6ea-42e3-b80b-3cbd04dc0c1f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.3439106Z" level=info msg="createCtr: removing container 262457ea237ebad471b5ae976b91bbbdd55dcd0de930648457c28448315cf7af" id=6d300eef-d6ea-42e3-b80b-3cbd04dc0c1f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.343967174Z" level=info msg="createCtr: deleting container 262457ea237ebad471b5ae976b91bbbdd55dcd0de930648457c28448315cf7af from storage" id=6d300eef-d6ea-42e3-b80b-3cbd04dc0c1f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.346765146Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-753218_kube-system_b25a71e49a335bbe853872de1b1e3093_0" id=6d300eef-d6ea-42e3-b80b-3cbd04dc0c1f name=/runtime.v1.RuntimeService/CreateContainer
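	Every CreateContainer attempt above dies with "cannot open sd-bus: No such file or directory", the signature of a runtime driving cgroups through the systemd D-Bus API on a host where that bus socket is unreachable. Whether that is the root cause of this run is an assumption the report itself does not confirm; if it is, the relevant knob is CRI-O's cgroup manager. A hypothetical /etc/crio/crio.conf excerpt (cgroup_manager and conmon_cgroup are real CRI-O options, but the values shown are one way to remove the sd-bus dependency, not a verified fix):

	[crio.runtime]
	# "systemd" drives cgroups via the systemd D-Bus socket and fails exactly
	# like the log above when that socket is missing; "cgroupfs" writes the
	# cgroup hierarchy directly and needs no sd-bus.
	cgroup_manager = "cgroupfs"
	# With the cgroupfs manager, conmon must live in a pod-level cgroup
	# rather than a systemd slice.
	conmon_cgroup = "pod"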
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:18:31.409253    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:18:31.409802    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:18:31.411325    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:18:31.411769    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:18:31.413254    5278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
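	The kubectl invocation above fails at the TCP layer, so nothing is listening on 8441 at all -- consistent with the kube-apiserver container never being created. A one-shot probe like the following separates "not listening" from "listening but unhealthy"; it is a sketch that assumes the standard apiserver /livez health endpoint and skips TLS verification in place of loading the cluster CA bundle:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Short timeout: a dead listener fails fast with "connection refused",
		// exactly as in the stderr above.
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://localhost:8441/livez")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver answered:", resp.Status)
	}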
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:18:31 up  1:00,  0 user,  load average: 0.41, 0.13, 0.09
	Linux functional-753218 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:18:28 functional-753218 kubelet[1799]: E1002 20:18:28.021399    1799 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-753218.186ac570b511e75f\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753218.186ac570b511e75f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753218,UID:functional-753218,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-753218 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-753218,},FirstTimestamp:2025-10-02 20:08:12.306458463 +0000 UTC m=+0.389053367,LastTimestamp:2025-10-02 20:08:12.307668719 +0000 UTC m=+0.390263643,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753218,}"
	Oct 02 20:18:29 functional-753218 kubelet[1799]: E1002 20:18:29.312925    1799 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:18:29 functional-753218 kubelet[1799]: E1002 20:18:29.313003    1799 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:18:29 functional-753218 kubelet[1799]: E1002 20:18:29.352456    1799 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:18:29 functional-753218 kubelet[1799]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:29 functional-753218 kubelet[1799]:  > podSandboxID="65675f5fefd97e29be9e11728def45d5a2c472bac18f3ca682b57fda50e5abf7"
	Oct 02 20:18:29 functional-753218 kubelet[1799]: E1002 20:18:29.352552    1799 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:18:29 functional-753218 kubelet[1799]:         container etcd start failed in pod etcd-functional-753218_kube-system(91f2b96cb3e3f380ded17f30e8d873bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:29 functional-753218 kubelet[1799]:  > logger="UnhandledError"
	Oct 02 20:18:29 functional-753218 kubelet[1799]: E1002 20:18:29.352592    1799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753218" podUID="91f2b96cb3e3f380ded17f30e8d873bd"
	Oct 02 20:18:29 functional-753218 kubelet[1799]: E1002 20:18:29.352911    1799 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:18:29 functional-753218 kubelet[1799]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:29 functional-753218 kubelet[1799]:  > podSandboxID="055d32a868ccc672da5251b2017711a92949e7226757dee30bfd43e3d0b93077"
	Oct 02 20:18:29 functional-753218 kubelet[1799]: E1002 20:18:29.353003    1799 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:18:29 functional-753218 kubelet[1799]:         container kube-apiserver start failed in pod kube-apiserver-functional-753218_kube-system(8f4d4ea1035e2535a9c472062bfdd7f7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:29 functional-753218 kubelet[1799]:  > logger="UnhandledError"
	Oct 02 20:18:29 functional-753218 kubelet[1799]: E1002 20:18:29.354056    1799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753218" podUID="8f4d4ea1035e2535a9c472062bfdd7f7"
	Oct 02 20:18:30 functional-753218 kubelet[1799]: E1002 20:18:30.313205    1799 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:18:30 functional-753218 kubelet[1799]: E1002 20:18:30.347061    1799 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:18:30 functional-753218 kubelet[1799]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:30 functional-753218 kubelet[1799]:  > podSandboxID="de1cc60186f989d4e0a8994c95a3f2e5173970c97e595ad7db2d469e1551df14"
	Oct 02 20:18:30 functional-753218 kubelet[1799]: E1002 20:18:30.347182    1799 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:18:30 functional-753218 kubelet[1799]:         container kube-scheduler start failed in pod kube-scheduler-functional-753218_kube-system(b25a71e49a335bbe853872de1b1e3093): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:30 functional-753218 kubelet[1799]:  > logger="UnhandledError"
	Oct 02 20:18:30 functional-753218 kubelet[1799]: E1002 20:18:30.347221    1799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753218" podUID="b25a71e49a335bbe853872de1b1e3093"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218: exit status 2 (291.280402ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753218" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (2.01s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (2s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-753218 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-753218 get pods: exit status 1 (96.133369ms)

                                                
                                                
** stderr ** 
	E1002 20:18:32.258376   38613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:18:32.258759   38613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:18:32.260186   38613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:18:32.260466   38613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:18:32.261839   38613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-753218 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753218
helpers_test.go:243: (dbg) docker inspect functional-753218:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	        "Created": "2025-10-02T20:04:01.746609873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27425,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:04:01.779241985Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hosts",
	        "LogPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2-json.log",
	        "Name": "/functional-753218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	                "LowerDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753218",
	                "Source": "/var/lib/docker/volumes/functional-753218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753218",
	                "name.minikube.sigs.k8s.io": "functional-753218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "004081b2f3377fd186346a9b01eb1a4bc9a3def8255492ba6d60a3d97d3ae02f",
	            "SandboxKey": "/var/run/docker/netns/004081b2f337",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:da:57:41:87:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5a9808f557430f8bb8f634f8b99193d9e4883c7bab84d388ec04ce3ac0291ef6",
	                    "EndpointID": "2b07b33f476d99b70eed072dbaf735ba0ad3a85f48f17fd63921a2bfbfd2db45",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753218",
	                        "13014a14c3fc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
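The inspect dump above is what the harness keys off when it dials the node: sshd (22/tcp) is published on 127.0.0.1:32778 and the API server port (8441/tcp) on 127.0.0.1:32781. A minimal Go sketch of extracting those mappings, assuming the JSON array above has been saved as inspect.json (the file name is ours, not the harness's):

    // portmap.go: print host-mapped ports from a docker container inspect dump.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    type inspect struct {
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIp   string
    			HostPort string
    		}
    	}
    }

    func main() {
    	data, err := os.ReadFile("inspect.json")
    	if err != nil {
    		panic(err)
    	}
    	var containers []inspect // docker inspect emits a JSON array
    	if err := json.Unmarshal(data, &containers); err != nil {
    		panic(err)
    	}
    	for proto, bindings := range containers[0].NetworkSettings.Ports {
    		for _, b := range bindings {
    			fmt.Printf("%s -> %s:%s\n", proto, b.HostIp, b.HostPort)
    		}
    	}
    }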
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218: exit status 2 (277.522503ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
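Note what the harness is flagging here: the host prints Running, yet the command exits 2. minikube encodes component health bit-by-bit in the status exit code (1 = host not OK, 2 = cluster/apiserver not OK, 4 = Kubernetes not OK), so status 2 with a Running host means the apiserver was judged unhealthy. A sketch of rerunning the same probe from Go, using the exact invocation above:

    // statuscheck.go: rerun the helpers_test.go status probe and surface the exit code.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-amd64", "status",
    		"--format={{.Host}}", "-p", "functional-753218", "-n", "functional-753218")
    	out, err := cmd.Output()
    	fmt.Printf("host: %s", out)
    	if ee, ok := err.(*exec.ExitError); ok {
    		// 2 = cluster bit set: host up, apiserver not OK, per minikube's
    		// bit-encoded status exit codes.
    		fmt.Printf("exit status: %d\n", ee.ExitCode())
    	}
    }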
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 logs -n 25
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-547008 --log_dir /tmp/nospam-547008 pause                                                              │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                            │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                            │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                            │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                               │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                               │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                               │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ delete  │ -p nospam-547008                                                                                              │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ start   │ -p functional-753218 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │                     │
	│ start   │ -p functional-753218 --alsologtostderr -v=8                                                                   │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │                     │
	│ cache   │ functional-753218 cache add registry.k8s.io/pause:3.1                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ functional-753218 cache add registry.k8s.io/pause:3.3                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ functional-753218 cache add registry.k8s.io/pause:latest                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ functional-753218 cache add minikube-local-cache-test:functional-753218                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ functional-753218 cache delete minikube-local-cache-test:functional-753218                                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ ssh     │ functional-753218 ssh sudo crictl images                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ ssh     │ functional-753218 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ ssh     │ functional-753218 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ cache   │ functional-753218 cache reload                                                                                │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ ssh     │ functional-753218 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ kubectl │ functional-753218 kubectl -- --context functional-753218 get pods                                             │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
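The open-ended last row (no END TIME) is the step under test. It can be replayed in isolation against the same profile; everything after the -- is handed through to the version-matched kubectl:

    out/minikube-linux-amd64 kubectl -- --context functional-753218 get pods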
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:12:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:12:14.161053   32280 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:12:14.161314   32280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:12:14.161324   32280 out.go:374] Setting ErrFile to fd 2...
	I1002 20:12:14.161329   32280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:12:14.161525   32280 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:12:14.161965   32280 out.go:368] Setting JSON to false
	I1002 20:12:14.162918   32280 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3283,"bootTime":1759432651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:12:14.163001   32280 start.go:140] virtualization: kvm guest
	I1002 20:12:14.165258   32280 out.go:179] * [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:12:14.166596   32280 notify.go:221] Checking for updates...
	I1002 20:12:14.166661   32280 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:12:14.168151   32280 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:12:14.169781   32280 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:14.170964   32280 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:12:14.172159   32280 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:12:14.173393   32280 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:12:14.175005   32280 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:12:14.175089   32280 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:12:14.198042   32280 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:12:14.198110   32280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:12:14.249812   32280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:12:14.240278836 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:12:14.249943   32280 docker.go:319] overlay module found
	I1002 20:12:14.251744   32280 out.go:179] * Using the docker driver based on existing profile
	I1002 20:12:14.252771   32280 start.go:306] selected driver: docker
	I1002 20:12:14.252788   32280 start.go:936] validating driver "docker" against &{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:12:14.252894   32280 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:12:14.253012   32280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:12:14.302717   32280 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:12:14.29341416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:12:14.303277   32280 cni.go:84] Creating CNI manager for ""
	I1002 20:12:14.303332   32280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:12:14.303374   32280 start.go:350] cluster config:
	{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
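The two ClusterConfig dumps above (the validation input and the final cluster config) are identical %+v prints of minikube's profile struct; only a few fields matter for this run. An illustrative subset, with values copied from the dump and the struct shape simplified for readability (this is not minikube's actual type):

    package main

    import "fmt"

    // clusterConfig is a readability-only stand-in for the full struct above.
    type clusterConfig struct {
    	Name              string
    	Driver            string
    	MemoryMB          int
    	APIServerPort     int
    	KubernetesVersion string
    	ContainerRuntime  string
    	NodeIP            string
    }

    func main() {
    	cfg := clusterConfig{
    		Name:              "functional-753218",
    		Driver:            "docker",
    		MemoryMB:          4096,
    		APIServerPort:     8441,
    		KubernetesVersion: "v1.34.1",
    		ContainerRuntime:  "crio",
    		NodeIP:            "192.168.49.2",
    	}
    	fmt.Printf("%+v\n", cfg)
    }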
	I1002 20:12:14.305248   32280 out.go:179] * Starting "functional-753218" primary control-plane node in "functional-753218" cluster
	I1002 20:12:14.306703   32280 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:12:14.308110   32280 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:12:14.309231   32280 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:12:14.309270   32280 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:12:14.309292   32280 cache.go:59] Caching tarball of preloaded images
	I1002 20:12:14.309321   32280 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:12:14.309392   32280 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:12:14.309404   32280 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:12:14.309506   32280 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/config.json ...
	I1002 20:12:14.328595   32280 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:12:14.328612   32280 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:12:14.328641   32280 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:12:14.328685   32280 start.go:361] acquireMachinesLock for functional-753218: {Name:mk742badf6f1dbafca8397e398143e758831ae3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:12:14.328749   32280 start.go:365] duration metric: took 40.346µs to acquireMachinesLock for "functional-753218"
	I1002 20:12:14.328768   32280 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:12:14.328773   32280 fix.go:55] fixHost starting: 
	I1002 20:12:14.328978   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:14.345315   32280 fix.go:113] recreateIfNeeded on functional-753218: state=Running err=<nil>
	W1002 20:12:14.345339   32280 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:12:14.347103   32280 out.go:252] * Updating the running docker "functional-753218" container ...
	I1002 20:12:14.347127   32280 machine.go:93] provisionDockerMachine start ...
	I1002 20:12:14.347175   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.364778   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:14.365056   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:14.365071   32280 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:12:14.506481   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:12:14.506514   32280 ubuntu.go:182] provisioning hostname "functional-753218"
	I1002 20:12:14.506576   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.523646   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:14.523886   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:14.523904   32280 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753218 && echo "functional-753218" | sudo tee /etc/hostname
	I1002 20:12:14.674327   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:12:14.674412   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.691957   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:14.692191   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:14.692210   32280 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753218/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:12:14.834109   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:12:14.834144   32280 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:12:14.834205   32280 ubuntu.go:190] setting up certificates
	I1002 20:12:14.834219   32280 provision.go:84] configureAuth start
	I1002 20:12:14.834287   32280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:12:14.852021   32280 provision.go:143] copyHostCerts
	I1002 20:12:14.852056   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:12:14.852091   32280 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:12:14.852111   32280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:12:14.852184   32280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:12:14.852289   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:12:14.852315   32280 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:12:14.852322   32280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:12:14.852367   32280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:12:14.852431   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:12:14.852454   32280 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:12:14.852460   32280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:12:14.852497   32280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:12:14.852565   32280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.functional-753218 san=[127.0.0.1 192.168.49.2 functional-753218 localhost minikube]
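configureAuth mints a server certificate whose SANs are exactly the san=[...] list logged above. A rough stdlib sketch of that step, self-signed for brevity (minikube actually signs with its own CA; this is not the project's code):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-753218"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		// SANs matching san=[127.0.0.1 192.168.49.2 functional-753218 localhost minikube]
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    		DNSNames:    []string{"functional-753218", "localhost", "minikube"},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }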
	I1002 20:12:14.908205   32280 provision.go:177] copyRemoteCerts
	I1002 20:12:14.908265   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:12:14.908316   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:14.925225   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.025356   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:12:15.025415   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:12:15.042012   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:12:15.042068   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:12:15.059080   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:12:15.059140   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:12:15.075501   32280 provision.go:87] duration metric: took 241.264617ms to configureAuth
	I1002 20:12:15.075530   32280 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:12:15.075723   32280 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:12:15.075835   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.092499   32280 main.go:141] libmachine: Using SSH client type: native
	I1002 20:12:15.092718   32280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:12:15.092740   32280 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:12:15.350871   32280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:12:15.350899   32280 machine.go:96] duration metric: took 1.003764785s to provisionDockerMachine
	I1002 20:12:15.350913   32280 start.go:294] postStartSetup for "functional-753218" (driver="docker")
	I1002 20:12:15.350926   32280 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:12:15.350976   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:12:15.351010   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.368192   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.468976   32280 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:12:15.472512   32280 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1002 20:12:15.472527   32280 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1002 20:12:15.472540   32280 command_runner.go:130] > VERSION_ID="12"
	I1002 20:12:15.472545   32280 command_runner.go:130] > VERSION="12 (bookworm)"
	I1002 20:12:15.472553   32280 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1002 20:12:15.472556   32280 command_runner.go:130] > ID=debian
	I1002 20:12:15.472560   32280 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1002 20:12:15.472565   32280 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1002 20:12:15.472572   32280 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1002 20:12:15.472618   32280 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:12:15.472635   32280 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:12:15.472666   32280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:12:15.472731   32280 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:12:15.472806   32280 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:12:15.472815   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:12:15.472889   32280 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts -> hosts in /etc/test/nested/copy/12851
	I1002 20:12:15.472896   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts -> /etc/test/nested/copy/12851/hosts
	I1002 20:12:15.472925   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12851
	I1002 20:12:15.480384   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:12:15.496865   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts --> /etc/test/nested/copy/12851/hosts (40 bytes)
	I1002 20:12:15.513635   32280 start.go:297] duration metric: took 162.708522ms for postStartSetup
	I1002 20:12:15.513745   32280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:12:15.513794   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.530644   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.628445   32280 command_runner.go:130] > 39%
	I1002 20:12:15.628745   32280 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:12:15.633076   32280 command_runner.go:130] > 179G
	I1002 20:12:15.633306   32280 fix.go:57] duration metric: took 1.304525715s for fixHost
	I1002 20:12:15.633325   32280 start.go:84] releasing machines lock for "functional-753218", held for 1.30456494s
	I1002 20:12:15.633398   32280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:12:15.650579   32280 ssh_runner.go:195] Run: cat /version.json
	I1002 20:12:15.650618   32280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:12:15.650631   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.650688   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:15.668938   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.669107   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:15.765770   32280 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1002 20:12:15.817112   32280 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 20:12:15.819166   32280 ssh_runner.go:195] Run: systemctl --version
	I1002 20:12:15.825335   32280 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1002 20:12:15.825364   32280 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 20:12:15.825559   32280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:12:15.861701   32280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 20:12:15.866192   32280 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 20:12:15.866262   32280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:12:15.866323   32280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:12:15.874084   32280 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:12:15.874106   32280 start.go:496] detecting cgroup driver to use...
	I1002 20:12:15.874141   32280 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:12:15.874206   32280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:12:15.887803   32280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:12:15.899530   32280 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:12:15.899588   32280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:12:15.913378   32280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:12:15.925494   32280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:12:16.013036   32280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:12:16.099049   32280 docker.go:234] disabling docker service ...
	I1002 20:12:16.099135   32280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:12:16.112698   32280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:12:16.124592   32280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:12:16.212924   32280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:12:16.298302   32280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:12:16.310529   32280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:12:16.324186   32280 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 20:12:16.324212   32280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:12:16.324248   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.332999   32280 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:12:16.333067   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.341758   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.350162   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.358406   32280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:12:16.365887   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.374465   32280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:16.382513   32280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
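Net effect of the sed sequence above on /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands themselves (section headers elided; this is not captured output):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]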
	I1002 20:12:16.390861   32280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:12:16.397800   32280 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 20:12:16.397864   32280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:12:16.404831   32280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:12:16.487603   32280 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:12:19.404809   32280 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.917172928s)
	I1002 20:12:19.404840   32280 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:12:19.404889   32280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:12:19.408896   32280 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 20:12:19.408924   32280 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 20:12:19.408935   32280 command_runner.go:130] > Device: 0,59	Inode: 3843        Links: 1
	I1002 20:12:19.408947   32280 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:12:19.408956   32280 command_runner.go:130] > Access: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.408964   32280 command_runner.go:130] > Modify: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.408977   32280 command_runner.go:130] > Change: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.408989   32280 command_runner.go:130] >  Birth: 2025-10-02 20:12:19.387432116 +0000
	I1002 20:12:19.409044   32280 start.go:564] Will wait 60s for crictl version
	I1002 20:12:19.409092   32280 ssh_runner.go:195] Run: which crictl
	I1002 20:12:19.412689   32280 command_runner.go:130] > /usr/local/bin/crictl
	I1002 20:12:19.412744   32280 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:12:19.436957   32280 command_runner.go:130] > Version:  0.1.0
	I1002 20:12:19.436979   32280 command_runner.go:130] > RuntimeName:  cri-o
	I1002 20:12:19.436984   32280 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1002 20:12:19.436989   32280 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 20:12:19.437005   32280 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:12:19.437072   32280 ssh_runner.go:195] Run: crio --version
	I1002 20:12:19.464211   32280 command_runner.go:130] > crio version 1.34.1
	I1002 20:12:19.464228   32280 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:12:19.464234   32280 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:12:19.464240   32280 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:12:19.464244   32280 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:12:19.464248   32280 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:12:19.464252   32280 command_runner.go:130] >    Compiler:       gc
	I1002 20:12:19.464257   32280 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:12:19.464261   32280 command_runner.go:130] >    Linkmode:       static
	I1002 20:12:19.464264   32280 command_runner.go:130] >    BuildTags:
	I1002 20:12:19.464267   32280 command_runner.go:130] >      static
	I1002 20:12:19.464275   32280 command_runner.go:130] >      netgo
	I1002 20:12:19.464279   32280 command_runner.go:130] >      osusergo
	I1002 20:12:19.464283   32280 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:12:19.464288   32280 command_runner.go:130] >      seccomp
	I1002 20:12:19.464291   32280 command_runner.go:130] >      apparmor
	I1002 20:12:19.464298   32280 command_runner.go:130] >      selinux
	I1002 20:12:19.464302   32280 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:12:19.464306   32280 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:12:19.464310   32280 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:12:19.464385   32280 ssh_runner.go:195] Run: crio --version
	I1002 20:12:19.491564   32280 command_runner.go:130] > crio version 1.34.1
	I1002 20:12:19.491590   32280 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:12:19.491596   32280 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:12:19.491601   32280 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:12:19.491605   32280 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:12:19.491609   32280 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:12:19.491612   32280 command_runner.go:130] >    Compiler:       gc
	I1002 20:12:19.491619   32280 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:12:19.491623   32280 command_runner.go:130] >    Linkmode:       static
	I1002 20:12:19.491627   32280 command_runner.go:130] >    BuildTags:
	I1002 20:12:19.491630   32280 command_runner.go:130] >      static
	I1002 20:12:19.491634   32280 command_runner.go:130] >      netgo
	I1002 20:12:19.491637   32280 command_runner.go:130] >      osusergo
	I1002 20:12:19.491641   32280 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:12:19.491665   32280 command_runner.go:130] >      seccomp
	I1002 20:12:19.491671   32280 command_runner.go:130] >      apparmor
	I1002 20:12:19.491681   32280 command_runner.go:130] >      selinux
	I1002 20:12:19.491687   32280 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:12:19.491700   32280 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:12:19.491719   32280 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:12:19.493718   32280 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:12:19.495253   32280 cli_runner.go:164] Run: docker network inspect functional-753218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:12:19.512253   32280 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:12:19.516262   32280 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1002 20:12:19.516341   32280 kubeadm.go:883] updating cluster {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
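The cluster definition above is persisted per profile and can be inspected offline. A sketch, assuming the default MINIKUBE_HOME and that jq is installed (both assumptions, not shown in this log):

    # Dump the stored Kubernetes config for this profile (sketch)
    jq '.KubernetesConfig' "$HOME/.minikube/profiles/functional-753218/config.json"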
	I1002 20:12:19.516485   32280 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:12:19.516543   32280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:12:19.546693   32280 command_runner.go:130] > {
	I1002 20:12:19.546715   32280 command_runner.go:130] >   "images":  [
	I1002 20:12:19.546721   32280 command_runner.go:130] >     {
	I1002 20:12:19.546728   32280 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:12:19.546732   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.546739   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:12:19.546745   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546774   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.546794   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:12:19.546808   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:12:19.546815   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546819   32280 command_runner.go:130] >       "size":  "109379124",
	I1002 20:12:19.546826   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.546835   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.546843   32280 command_runner.go:130] >     },
	I1002 20:12:19.546850   32280 command_runner.go:130] >     {
	I1002 20:12:19.546862   32280 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:12:19.546873   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.546881   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:12:19.546890   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546896   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.546909   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:12:19.546920   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:12:19.546937   32280 command_runner.go:130] >       ],
	I1002 20:12:19.546947   32280 command_runner.go:130] >       "size":  "31470524",
	I1002 20:12:19.546954   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.546966   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.546972   32280 command_runner.go:130] >     },
	I1002 20:12:19.546979   32280 command_runner.go:130] >     {
	I1002 20:12:19.546989   32280 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:12:19.547010   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547022   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:12:19.547032   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547039   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547053   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:12:19.547065   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:12:19.547073   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547080   32280 command_runner.go:130] >       "size":  "76103547",
	I1002 20:12:19.547087   32280 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:12:19.547091   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547094   32280 command_runner.go:130] >     },
	I1002 20:12:19.547100   32280 command_runner.go:130] >     {
	I1002 20:12:19.547113   32280 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:12:19.547119   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547129   32280 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:12:19.547135   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547144   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547154   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:12:19.547167   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:12:19.547176   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547182   32280 command_runner.go:130] >       "size":  "195976448",
	I1002 20:12:19.547187   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547192   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547201   32280 command_runner.go:130] >       },
	I1002 20:12:19.547217   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547228   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547233   32280 command_runner.go:130] >     },
	I1002 20:12:19.547242   32280 command_runner.go:130] >     {
	I1002 20:12:19.547252   32280 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:12:19.547261   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547269   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:12:19.547276   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547281   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547301   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:12:19.547316   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:12:19.547321   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547331   32280 command_runner.go:130] >       "size":  "89046001",
	I1002 20:12:19.547337   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547346   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547352   32280 command_runner.go:130] >       },
	I1002 20:12:19.547361   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547368   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547376   32280 command_runner.go:130] >     },
	I1002 20:12:19.547380   32280 command_runner.go:130] >     {
	I1002 20:12:19.547390   32280 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:12:19.547396   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547407   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:12:19.547413   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547423   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547435   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:12:19.547451   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:12:19.547459   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547466   32280 command_runner.go:130] >       "size":  "76004181",
	I1002 20:12:19.547474   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547480   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547489   32280 command_runner.go:130] >       },
	I1002 20:12:19.547495   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547507   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547512   32280 command_runner.go:130] >     },
	I1002 20:12:19.547517   32280 command_runner.go:130] >     {
	I1002 20:12:19.547527   32280 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:12:19.547534   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547541   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:12:19.547546   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547552   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547561   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:12:19.547582   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:12:19.547592   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547599   32280 command_runner.go:130] >       "size":  "73138073",
	I1002 20:12:19.547606   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547615   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547624   32280 command_runner.go:130] >     },
	I1002 20:12:19.547629   32280 command_runner.go:130] >     {
	I1002 20:12:19.547641   32280 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:12:19.547658   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547667   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:12:19.547673   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547683   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547693   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:12:19.547720   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:12:19.547729   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547733   32280 command_runner.go:130] >       "size":  "53844823",
	I1002 20:12:19.547737   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547743   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.547752   32280 command_runner.go:130] >       },
	I1002 20:12:19.547758   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547768   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.547775   32280 command_runner.go:130] >     },
	I1002 20:12:19.547782   32280 command_runner.go:130] >     {
	I1002 20:12:19.547794   32280 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:12:19.547804   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.547814   32280 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:12:19.547820   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547825   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.547839   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:12:19.547853   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:12:19.547861   32280 command_runner.go:130] >       ],
	I1002 20:12:19.547867   32280 command_runner.go:130] >       "size":  "742092",
	I1002 20:12:19.547876   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.547887   32280 command_runner.go:130] >         "value":  "65535"
	I1002 20:12:19.547894   32280 command_runner.go:130] >       },
	I1002 20:12:19.547900   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.547906   32280 command_runner.go:130] >       "pinned":  true
	I1002 20:12:19.547910   32280 command_runner.go:130] >     }
	I1002 20:12:19.547917   32280 command_runner.go:130] >   ]
	I1002 20:12:19.547924   32280 command_runner.go:130] > }
	I1002 20:12:19.548472   32280 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:12:19.548485   32280 crio.go:433] Images already preloaded, skipping extraction
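The image-list JSON above is easier to scan when filtered. A sketch using jq on the node (jq availability is an assumption):

    # Summarize the CRI-O image store as "tag<TAB>size-in-bytes" (sketch)
    sudo crictl images --output json \
      | jq -r '.images[] | (.repoTags[0] // .id[:13]) + "\t" + .size'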
	I1002 20:12:19.548524   32280 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:12:19.570809   32280 command_runner.go:130] > {
	I1002 20:12:19.570828   32280 command_runner.go:130] >   "images":  [
	I1002 20:12:19.570831   32280 command_runner.go:130] >     {
	I1002 20:12:19.570839   32280 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:12:19.570844   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.570849   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:12:19.570853   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570857   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.570864   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:12:19.570871   32280 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:12:19.570877   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570882   32280 command_runner.go:130] >       "size":  "109379124",
	I1002 20:12:19.570889   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.570902   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.570908   32280 command_runner.go:130] >     },
	I1002 20:12:19.570914   32280 command_runner.go:130] >     {
	I1002 20:12:19.570922   32280 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:12:19.570928   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.570932   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:12:19.570938   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570941   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.570948   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:12:19.570958   32280 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:12:19.570964   32280 command_runner.go:130] >       ],
	I1002 20:12:19.570971   32280 command_runner.go:130] >       "size":  "31470524",
	I1002 20:12:19.570976   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.570985   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.570990   32280 command_runner.go:130] >     },
	I1002 20:12:19.570993   32280 command_runner.go:130] >     {
	I1002 20:12:19.571001   32280 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:12:19.571005   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571012   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:12:19.571016   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571021   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571028   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:12:19.571037   32280 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:12:19.571043   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571047   32280 command_runner.go:130] >       "size":  "76103547",
	I1002 20:12:19.571050   32280 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:12:19.571056   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571059   32280 command_runner.go:130] >     },
	I1002 20:12:19.571065   32280 command_runner.go:130] >     {
	I1002 20:12:19.571071   32280 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:12:19.571077   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571081   32280 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:12:19.571087   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571091   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571099   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:12:19.571108   32280 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:12:19.571113   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571117   32280 command_runner.go:130] >       "size":  "195976448",
	I1002 20:12:19.571122   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571126   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571132   32280 command_runner.go:130] >       },
	I1002 20:12:19.571139   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571145   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571152   32280 command_runner.go:130] >     },
	I1002 20:12:19.571157   32280 command_runner.go:130] >     {
	I1002 20:12:19.571163   32280 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:12:19.571169   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571173   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:12:19.571179   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571183   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571192   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:12:19.571201   32280 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:12:19.571207   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571211   32280 command_runner.go:130] >       "size":  "89046001",
	I1002 20:12:19.571216   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571220   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571226   32280 command_runner.go:130] >       },
	I1002 20:12:19.571231   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571234   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571237   32280 command_runner.go:130] >     },
	I1002 20:12:19.571242   32280 command_runner.go:130] >     {
	I1002 20:12:19.571249   32280 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:12:19.571255   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571260   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:12:19.571265   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571269   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571276   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:12:19.571286   32280 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:12:19.571292   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571295   32280 command_runner.go:130] >       "size":  "76004181",
	I1002 20:12:19.571301   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571305   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571310   32280 command_runner.go:130] >       },
	I1002 20:12:19.571314   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571318   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571323   32280 command_runner.go:130] >     },
	I1002 20:12:19.571327   32280 command_runner.go:130] >     {
	I1002 20:12:19.571335   32280 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:12:19.571339   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571349   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:12:19.571355   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571359   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571367   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:12:19.571376   32280 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:12:19.571382   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571386   32280 command_runner.go:130] >       "size":  "73138073",
	I1002 20:12:19.571393   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571397   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571402   32280 command_runner.go:130] >     },
	I1002 20:12:19.571405   32280 command_runner.go:130] >     {
	I1002 20:12:19.571410   32280 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:12:19.571414   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571418   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:12:19.571422   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571425   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571431   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:12:19.571446   32280 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:12:19.571455   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571461   32280 command_runner.go:130] >       "size":  "53844823",
	I1002 20:12:19.571469   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571474   32280 command_runner.go:130] >         "value":  "0"
	I1002 20:12:19.571482   32280 command_runner.go:130] >       },
	I1002 20:12:19.571488   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571495   32280 command_runner.go:130] >       "pinned":  false
	I1002 20:12:19.571498   32280 command_runner.go:130] >     },
	I1002 20:12:19.571504   32280 command_runner.go:130] >     {
	I1002 20:12:19.571510   32280 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:12:19.571516   32280 command_runner.go:130] >       "repoTags":  [
	I1002 20:12:19.571520   32280 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:12:19.571526   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571530   32280 command_runner.go:130] >       "repoDigests":  [
	I1002 20:12:19.571542   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:12:19.571552   32280 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:12:19.571556   32280 command_runner.go:130] >       ],
	I1002 20:12:19.571562   32280 command_runner.go:130] >       "size":  "742092",
	I1002 20:12:19.571565   32280 command_runner.go:130] >       "uid":  {
	I1002 20:12:19.571571   32280 command_runner.go:130] >         "value":  "65535"
	I1002 20:12:19.571575   32280 command_runner.go:130] >       },
	I1002 20:12:19.571581   32280 command_runner.go:130] >       "username":  "",
	I1002 20:12:19.571585   32280 command_runner.go:130] >       "pinned":  true
	I1002 20:12:19.571590   32280 command_runner.go:130] >     }
	I1002 20:12:19.571593   32280 command_runner.go:130] >   ]
	I1002 20:12:19.571598   32280 command_runner.go:130] > }
	I1002 20:12:19.572597   32280 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:12:19.572614   32280 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:12:19.572621   32280 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:12:19.572734   32280 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
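The [Unit]/[Service] fragment above overrides the kubelet ExecStart via a systemd drop-in; per minikube's usual layout it lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf inside the node (the path is an assumption, not confirmed by this log). A sketch for verifying it:

    # Show the generated kubelet drop-in, then reload units after any manual edit (sketch)
    docker exec functional-753218 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    docker exec functional-753218 sudo systemctl daemon-reload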
	I1002 20:12:19.572796   32280 ssh_runner.go:195] Run: crio config
	I1002 20:12:19.612615   32280 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 20:12:19.612638   32280 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 20:12:19.612664   32280 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 20:12:19.612669   32280 command_runner.go:130] > #
	I1002 20:12:19.612689   32280 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 20:12:19.612698   32280 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 20:12:19.612709   32280 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 20:12:19.612721   32280 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 20:12:19.612728   32280 command_runner.go:130] > # reload'.
	I1002 20:12:19.612738   32280 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 20:12:19.612748   32280 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 20:12:19.612758   32280 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 20:12:19.612768   32280 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 20:12:19.612773   32280 command_runner.go:130] > [crio]
	I1002 20:12:19.612785   32280 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 20:12:19.612796   32280 command_runner.go:130] > # containers images, in this directory.
	I1002 20:12:19.612808   32280 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 20:12:19.612821   32280 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 20:12:19.612828   32280 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1002 20:12:19.612841   32280 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1002 20:12:19.612855   32280 command_runner.go:130] > # imagestore = ""
	I1002 20:12:19.612864   32280 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 20:12:19.612878   32280 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 20:12:19.612885   32280 command_runner.go:130] > # storage_driver = "overlay"
	I1002 20:12:19.612895   32280 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 20:12:19.612905   32280 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 20:12:19.612914   32280 command_runner.go:130] > # storage_option = [
	I1002 20:12:19.612917   32280 command_runner.go:130] > # ]
	I1002 20:12:19.612923   32280 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 20:12:19.612931   32280 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 20:12:19.612941   32280 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 20:12:19.612950   32280 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 20:12:19.612959   32280 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 20:12:19.612970   32280 command_runner.go:130] > # always happen on a node reboot
	I1002 20:12:19.612977   32280 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 20:12:19.612994   32280 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 20:12:19.613004   32280 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 20:12:19.613009   32280 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 20:12:19.613016   32280 command_runner.go:130] > # version_file_persist = ""
	I1002 20:12:19.613025   32280 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 20:12:19.613033   32280 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 20:12:19.613041   32280 command_runner.go:130] > # internal_wipe = true
	I1002 20:12:19.613054   32280 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1002 20:12:19.613066   32280 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1002 20:12:19.613075   32280 command_runner.go:130] > # internal_repair = true
	I1002 20:12:19.613083   32280 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 20:12:19.613095   32280 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 20:12:19.613113   32280 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 20:12:19.613120   32280 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 20:12:19.613129   32280 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 20:12:19.613134   32280 command_runner.go:130] > [crio.api]
	I1002 20:12:19.613142   32280 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 20:12:19.613150   32280 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 20:12:19.613162   32280 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 20:12:19.613173   32280 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 20:12:19.613185   32280 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 20:12:19.613197   32280 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 20:12:19.613204   32280 command_runner.go:130] > # stream_port = "0"
	I1002 20:12:19.613213   32280 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 20:12:19.613222   32280 command_runner.go:130] > # stream_enable_tls = false
	I1002 20:12:19.613231   32280 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 20:12:19.613238   32280 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 20:12:19.613248   32280 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 20:12:19.613260   32280 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1002 20:12:19.613266   32280 command_runner.go:130] > # stream_tls_cert = ""
	I1002 20:12:19.613274   32280 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 20:12:19.613292   32280 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1002 20:12:19.613301   32280 command_runner.go:130] > # stream_tls_key = ""
	I1002 20:12:19.613309   32280 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 20:12:19.613323   32280 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 20:12:19.613331   32280 command_runner.go:130] > # automatically pick up the changes.
	I1002 20:12:19.613340   32280 command_runner.go:130] > # stream_tls_ca = ""
	I1002 20:12:19.613394   32280 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:12:19.613408   32280 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 20:12:19.613420   32280 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:12:19.613430   32280 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1002 20:12:19.613440   32280 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 20:12:19.613452   32280 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 20:12:19.613458   32280 command_runner.go:130] > [crio.runtime]
	I1002 20:12:19.613469   32280 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 20:12:19.613481   32280 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 20:12:19.613487   32280 command_runner.go:130] > # "nofile=1024:2048"
	I1002 20:12:19.613500   32280 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 20:12:19.613508   32280 command_runner.go:130] > # default_ulimits = [
	I1002 20:12:19.613514   32280 command_runner.go:130] > # ]
	I1002 20:12:19.613526   32280 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 20:12:19.613532   32280 command_runner.go:130] > # no_pivot = false
	I1002 20:12:19.613543   32280 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 20:12:19.613554   32280 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 20:12:19.613564   32280 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 20:12:19.613573   32280 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 20:12:19.613584   32280 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 20:12:19.613594   32280 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:12:19.613603   32280 command_runner.go:130] > # conmon = ""
	I1002 20:12:19.613611   32280 command_runner.go:130] > # Cgroup setting for conmon
	I1002 20:12:19.613625   32280 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 20:12:19.613632   32280 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 20:12:19.613642   32280 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 20:12:19.613664   32280 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 20:12:19.613682   32280 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:12:19.613692   32280 command_runner.go:130] > # conmon_env = [
	I1002 20:12:19.613698   32280 command_runner.go:130] > # ]
	I1002 20:12:19.613710   32280 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 20:12:19.613720   32280 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 20:12:19.613729   32280 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 20:12:19.613739   32280 command_runner.go:130] > # default_env = [
	I1002 20:12:19.613746   32280 command_runner.go:130] > # ]
	I1002 20:12:19.613758   32280 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 20:12:19.613769   32280 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1002 20:12:19.613778   32280 command_runner.go:130] > # selinux = false
	I1002 20:12:19.613788   32280 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 20:12:19.613803   32280 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1002 20:12:19.613814   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.613822   32280 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:12:19.613835   32280 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1002 20:12:19.613846   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.613852   32280 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1002 20:12:19.613865   32280 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 20:12:19.613878   32280 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 20:12:19.613890   32280 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 20:12:19.613899   32280 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 20:12:19.613908   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.613917   32280 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 20:12:19.613926   32280 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 20:12:19.613937   32280 command_runner.go:130] > # the cgroup blockio controller.
	I1002 20:12:19.613944   32280 command_runner.go:130] > # blockio_config_file = ""
	I1002 20:12:19.613958   32280 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1002 20:12:19.613965   32280 command_runner.go:130] > # blockio parameters.
	I1002 20:12:19.613974   32280 command_runner.go:130] > # blockio_reload = false
	I1002 20:12:19.613983   32280 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 20:12:19.613994   32280 command_runner.go:130] > # irqbalance daemon.
	I1002 20:12:19.614002   32280 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 20:12:19.614013   32280 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1002 20:12:19.614023   32280 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1002 20:12:19.614037   32280 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1002 20:12:19.614048   32280 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1002 20:12:19.614061   32280 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 20:12:19.614068   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.614077   32280 command_runner.go:130] > # rdt_config_file = ""
	I1002 20:12:19.614085   32280 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 20:12:19.614095   32280 command_runner.go:130] > # cgroup_manager = "systemd"
	I1002 20:12:19.614104   32280 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 20:12:19.614113   32280 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 20:12:19.614127   32280 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 20:12:19.614139   32280 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 20:12:19.614147   32280 command_runner.go:130] > # will be added.
	I1002 20:12:19.614155   32280 command_runner.go:130] > # default_capabilities = [
	I1002 20:12:19.614163   32280 command_runner.go:130] > # 	"CHOWN",
	I1002 20:12:19.614170   32280 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 20:12:19.614177   32280 command_runner.go:130] > # 	"FSETID",
	I1002 20:12:19.614182   32280 command_runner.go:130] > # 	"FOWNER",
	I1002 20:12:19.614187   32280 command_runner.go:130] > # 	"SETGID",
	I1002 20:12:19.614210   32280 command_runner.go:130] > # 	"SETUID",
	I1002 20:12:19.614214   32280 command_runner.go:130] > # 	"SETPCAP",
	I1002 20:12:19.614219   32280 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 20:12:19.614223   32280 command_runner.go:130] > # 	"KILL",
	I1002 20:12:19.614227   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614236   32280 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 20:12:19.614243   32280 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 20:12:19.614248   32280 command_runner.go:130] > # add_inheritable_capabilities = false
	I1002 20:12:19.614256   32280 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 20:12:19.614265   32280 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:12:19.614271   32280 command_runner.go:130] > default_sysctls = [
	I1002 20:12:19.614279   32280 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1002 20:12:19.614284   32280 command_runner.go:130] > ]
	I1002 20:12:19.614291   32280 command_runner.go:130] > # List of devices on the host that a
	I1002 20:12:19.614299   32280 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 20:12:19.614308   32280 command_runner.go:130] > # allowed_devices = [
	I1002 20:12:19.614313   32280 command_runner.go:130] > # 	"/dev/fuse",
	I1002 20:12:19.614321   32280 command_runner.go:130] > # 	"/dev/net/tun",
	I1002 20:12:19.614327   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614335   32280 command_runner.go:130] > # List of additional devices, specified as
	I1002 20:12:19.614349   32280 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 20:12:19.614359   32280 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 20:12:19.614368   32280 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:12:19.614376   32280 command_runner.go:130] > # additional_devices = [
	I1002 20:12:19.614381   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614388   32280 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 20:12:19.614394   32280 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 20:12:19.614398   32280 command_runner.go:130] > # 	"/etc/cdi",
	I1002 20:12:19.614402   32280 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 20:12:19.614404   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614410   32280 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 20:12:19.614416   32280 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 20:12:19.614420   32280 command_runner.go:130] > # Defaults to false.
	I1002 20:12:19.614424   32280 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 20:12:19.614432   32280 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 20:12:19.614438   32280 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 20:12:19.614441   32280 command_runner.go:130] > # hooks_dir = [
	I1002 20:12:19.614445   32280 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 20:12:19.614449   32280 command_runner.go:130] > # ]
	I1002 20:12:19.614454   32280 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 20:12:19.614462   32280 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 20:12:19.614467   32280 command_runner.go:130] > # its default mounts from the following two files:
	I1002 20:12:19.614471   32280 command_runner.go:130] > #
	I1002 20:12:19.614476   32280 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 20:12:19.614484   32280 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 20:12:19.614489   32280 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 20:12:19.614494   32280 command_runner.go:130] > #
	I1002 20:12:19.614500   32280 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 20:12:19.614506   32280 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 20:12:19.614514   32280 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 20:12:19.614519   32280 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 20:12:19.614524   32280 command_runner.go:130] > #
	I1002 20:12:19.614528   32280 command_runner.go:130] > # default_mounts_file = ""
	I1002 20:12:19.614532   32280 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 20:12:19.614539   32280 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 20:12:19.614545   32280 command_runner.go:130] > # pids_limit = -1
	I1002 20:12:19.614551   32280 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1002 20:12:19.614559   32280 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 20:12:19.614564   32280 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 20:12:19.614572   32280 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 20:12:19.614578   32280 command_runner.go:130] > # log_size_max = -1
	I1002 20:12:19.614716   32280 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 20:12:19.614727   32280 command_runner.go:130] > # log_to_journald = false
	I1002 20:12:19.614733   32280 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 20:12:19.614738   32280 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 20:12:19.614745   32280 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 20:12:19.614750   32280 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 20:12:19.614757   32280 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 20:12:19.614761   32280 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 20:12:19.614766   32280 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 20:12:19.614772   32280 command_runner.go:130] > # read_only = false
	I1002 20:12:19.614777   32280 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 20:12:19.614785   32280 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 20:12:19.614789   32280 command_runner.go:130] > # live configuration reload.
	I1002 20:12:19.614795   32280 command_runner.go:130] > # log_level = "info"
	I1002 20:12:19.614800   32280 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 20:12:19.614807   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.614811   32280 command_runner.go:130] > # log_filter = ""
	I1002 20:12:19.614817   32280 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 20:12:19.614825   32280 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 20:12:19.614829   32280 command_runner.go:130] > # separated by comma.
	I1002 20:12:19.614839   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614846   32280 command_runner.go:130] > # uid_mappings = ""
	I1002 20:12:19.614851   32280 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 20:12:19.614859   32280 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 20:12:19.614863   32280 command_runner.go:130] > # separated by comma.
	I1002 20:12:19.614873   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614877   32280 command_runner.go:130] > # gid_mappings = ""
	I1002 20:12:19.614884   32280 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 20:12:19.614890   32280 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:12:19.614898   32280 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:12:19.614905   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614909   32280 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 20:12:19.614916   32280 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 20:12:19.614924   32280 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:12:19.614931   32280 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:12:19.614940   32280 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:12:19.614944   32280 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 20:12:19.614949   32280 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 20:12:19.614959   32280 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 20:12:19.614964   32280 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 20:12:19.614970   32280 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 20:12:19.614975   32280 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 20:12:19.614983   32280 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 20:12:19.614988   32280 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 20:12:19.614993   32280 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 20:12:19.614999   32280 command_runner.go:130] > # drop_infra_ctr = true
	I1002 20:12:19.615004   32280 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 20:12:19.615009   32280 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 20:12:19.615018   32280 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 20:12:19.615024   32280 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 20:12:19.615031   32280 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1002 20:12:19.615038   32280 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1002 20:12:19.615044   32280 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1002 20:12:19.615052   32280 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1002 20:12:19.615055   32280 command_runner.go:130] > # shared_cpuset = ""
	I1002 20:12:19.615063   32280 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 20:12:19.615068   32280 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 20:12:19.615073   32280 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 20:12:19.615080   32280 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 20:12:19.615086   32280 command_runner.go:130] > # pinns_path = ""
	I1002 20:12:19.615090   32280 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1002 20:12:19.615098   32280 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1002 20:12:19.615102   32280 command_runner.go:130] > # enable_criu_support = true
	I1002 20:12:19.615111   32280 command_runner.go:130] > # Enable/disable the generation of the container,
	I1002 20:12:19.615116   32280 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1002 20:12:19.615123   32280 command_runner.go:130] > # enable_pod_events = false
	I1002 20:12:19.615128   32280 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 20:12:19.615135   32280 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1002 20:12:19.615139   32280 command_runner.go:130] > # default_runtime = "crun"
	I1002 20:12:19.615146   32280 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 20:12:19.615152   32280 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1002 20:12:19.615161   32280 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 20:12:19.615168   32280 command_runner.go:130] > # creation as a file is not desired either.
	I1002 20:12:19.615175   32280 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 20:12:19.615182   32280 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 20:12:19.615187   32280 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 20:12:19.615190   32280 command_runner.go:130] > # ]
	I1002 20:12:19.615195   32280 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 20:12:19.615201   32280 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 20:12:19.615207   32280 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1002 20:12:19.615214   32280 command_runner.go:130] > # Each entry in the table should follow the format:
	I1002 20:12:19.615216   32280 command_runner.go:130] > #
	I1002 20:12:19.615221   32280 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1002 20:12:19.615227   32280 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1002 20:12:19.615231   32280 command_runner.go:130] > # runtime_type = "oci"
	I1002 20:12:19.615237   32280 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1002 20:12:19.615241   32280 command_runner.go:130] > # inherit_default_runtime = false
	I1002 20:12:19.615246   32280 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1002 20:12:19.615252   32280 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1002 20:12:19.615256   32280 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1002 20:12:19.615262   32280 command_runner.go:130] > # monitor_env = []
	I1002 20:12:19.615266   32280 command_runner.go:130] > # privileged_without_host_devices = false
	I1002 20:12:19.615270   32280 command_runner.go:130] > # allowed_annotations = []
	I1002 20:12:19.615278   32280 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1002 20:12:19.615282   32280 command_runner.go:130] > # no_sync_log = false
	I1002 20:12:19.615288   32280 command_runner.go:130] > # default_annotations = {}
	I1002 20:12:19.615293   32280 command_runner.go:130] > # stream_websockets = false
	I1002 20:12:19.615299   32280 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:12:19.615333   32280 command_runner.go:130] > # Where:
	I1002 20:12:19.615343   32280 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1002 20:12:19.615349   32280 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1002 20:12:19.615354   32280 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 20:12:19.615363   32280 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 20:12:19.615366   32280 command_runner.go:130] > #   in $PATH.
	I1002 20:12:19.615375   32280 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1002 20:12:19.615380   32280 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 20:12:19.615387   32280 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1002 20:12:19.615391   32280 command_runner.go:130] > #   state.
	I1002 20:12:19.615400   32280 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 20:12:19.615413   32280 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 20:12:19.615421   32280 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1002 20:12:19.615428   32280 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1002 20:12:19.615435   32280 command_runner.go:130] > #   the values from the default runtime on load time.
	I1002 20:12:19.615441   32280 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 20:12:19.615446   32280 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 20:12:19.615452   32280 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 20:12:19.615458   32280 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 20:12:19.615465   32280 command_runner.go:130] > #   The currently recognized values are:
	I1002 20:12:19.615470   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 20:12:19.615479   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 20:12:19.615485   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 20:12:19.615490   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 20:12:19.615499   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 20:12:19.615505   32280 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 20:12:19.615514   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1002 20:12:19.615521   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1002 20:12:19.615529   32280 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 20:12:19.615534   32280 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1002 20:12:19.615541   32280 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1002 20:12:19.615549   32280 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1002 20:12:19.615555   32280 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1002 20:12:19.615564   32280 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1002 20:12:19.615569   32280 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1002 20:12:19.615579   32280 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1002 20:12:19.615586   32280 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1002 20:12:19.615589   32280 command_runner.go:130] > #   deprecated option "conmon".
	I1002 20:12:19.615596   32280 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1002 20:12:19.615601   32280 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1002 20:12:19.615607   32280 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1002 20:12:19.615614   32280 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 20:12:19.615621   32280 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1002 20:12:19.615628   32280 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1002 20:12:19.615634   32280 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1002 20:12:19.615638   32280 command_runner.go:130] > #   conmon-rs by using (see the sketch after this list):
	I1002 20:12:19.615656   32280 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1002 20:12:19.615668   32280 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1002 20:12:19.615682   32280 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1002 20:12:19.615690   32280 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1002 20:12:19.615695   32280 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1002 20:12:19.615704   32280 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1002 20:12:19.615712   32280 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1002 20:12:19.615720   32280 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1002 20:12:19.615731   32280 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1002 20:12:19.615747   32280 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1002 20:12:19.615756   32280 command_runner.go:130] > #   when a machine crash happens.
	I1002 20:12:19.615765   32280 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1002 20:12:19.615774   32280 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1002 20:12:19.615784   32280 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1002 20:12:19.615788   32280 command_runner.go:130] > #   seccomp profile for the runtime.
	I1002 20:12:19.615797   32280 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1002 20:12:19.615804   32280 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1002 20:12:19.615810   32280 command_runner.go:130] > #
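A quick aside on the monitor_env entries listed above: a drop-in file is the usual way to set them per runtime handler. A minimal sketch, assuming a crun handler and conmon-rs logging to systemd (the file name is illustrative, and CRI-O's drop-in merge semantics apply):

    sudo tee /etc/crio/crio.conf.d/20-monitor-env.conf <<'EOF'
    [crio.runtime.runtimes.crun]
    # LOG_DRIVER is one of none, systemd, stdout (see the list above)
    monitor_env = [
      "LOG_DRIVER=systemd",
    ]
    EOF
    sudo systemctl restart crio   # reload the merged configuration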
	I1002 20:12:19.615818   32280 command_runner.go:130] > # Using the seccomp notifier feature:
	I1002 20:12:19.615826   32280 command_runner.go:130] > #
	I1002 20:12:19.615838   32280 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1002 20:12:19.615850   32280 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1002 20:12:19.615854   32280 command_runner.go:130] > #
	I1002 20:12:19.615860   32280 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1002 20:12:19.615868   32280 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1002 20:12:19.615871   32280 command_runner.go:130] > #
	I1002 20:12:19.615880   32280 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1002 20:12:19.615889   32280 command_runner.go:130] > # feature.
	I1002 20:12:19.615894   32280 command_runner.go:130] > #
	I1002 20:12:19.615906   32280 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1002 20:12:19.615918   32280 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1002 20:12:19.615931   32280 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1002 20:12:19.615943   32280 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1002 20:12:19.615954   32280 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1002 20:12:19.615957   32280 command_runner.go:130] > #
	I1002 20:12:19.615964   32280 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1002 20:12:19.615972   32280 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1002 20:12:19.615977   32280 command_runner.go:130] > #
	I1002 20:12:19.615989   32280 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1002 20:12:19.616001   32280 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1002 20:12:19.616010   32280 command_runner.go:130] > #
	I1002 20:12:19.616019   32280 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1002 20:12:19.616031   32280 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1002 20:12:19.616039   32280 command_runner.go:130] > # limitation.
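To make the notifier workflow above concrete, a pod opting in could look like the following sketch (assumes a runtime handler that allows the annotation; the pod name is illustrative, the pause image is the one referenced later in this config, and restartPolicy is Never as required above):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: seccomp-notifier-demo                       # illustrative name
      annotations:
        io.kubernetes.cri-o.seccompNotifierAction: "stop"
    spec:
      restartPolicy: Never                              # required, see the note above
      securityContext:
        seccompProfile:
          type: RuntimeDefault                          # a seccomp profile must be in effect
      containers:
      - name: demo
        image: registry.k8s.io/pause:3.10.1
    EOF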
	I1002 20:12:19.616045   32280 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1002 20:12:19.616054   32280 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1002 20:12:19.616058   32280 command_runner.go:130] > runtime_type = ""
	I1002 20:12:19.616063   32280 command_runner.go:130] > runtime_root = "/run/crun"
	I1002 20:12:19.616073   32280 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:12:19.616082   32280 command_runner.go:130] > runtime_config_path = ""
	I1002 20:12:19.616091   32280 command_runner.go:130] > container_min_memory = ""
	I1002 20:12:19.616098   32280 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:12:19.616107   32280 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:12:19.616115   32280 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:12:19.616124   32280 command_runner.go:130] > allowed_annotations = [
	I1002 20:12:19.616131   32280 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1002 20:12:19.616137   32280 command_runner.go:130] > ]
	I1002 20:12:19.616141   32280 command_runner.go:130] > privileged_without_host_devices = false
	I1002 20:12:19.616146   32280 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 20:12:19.616157   32280 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1002 20:12:19.616163   32280 command_runner.go:130] > runtime_type = ""
	I1002 20:12:19.616173   32280 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 20:12:19.616180   32280 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:12:19.616189   32280 command_runner.go:130] > runtime_config_path = ""
	I1002 20:12:19.616196   32280 command_runner.go:130] > container_min_memory = ""
	I1002 20:12:19.616206   32280 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:12:19.616215   32280 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:12:19.616221   32280 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:12:19.616228   32280 command_runner.go:130] > privileged_without_host_devices = false
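Following the runtimes-table format documented above, an extra handler can be registered with a drop-in. A sketch only; the handler name, binary path, and root are hypothetical:

    sudo tee /etc/crio/crio.conf.d/30-myruntime.conf <<'EOF'
    [crio.runtime.runtimes.myruntime]            # hypothetical handler name
    runtime_path = "/usr/local/bin/myruntime"    # hypothetical binary
    runtime_type = "oci"
    runtime_root = "/run/myruntime"
    monitor_path = "/usr/libexec/crio/conmon"
    EOF
    sudo systemctl restart crio

Pods then select it via a Kubernetes RuntimeClass whose handler field matches the table name.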
	I1002 20:12:19.616238   32280 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 20:12:19.616247   32280 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 20:12:19.616258   32280 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 20:12:19.616272   32280 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1002 20:12:19.616289   32280 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1002 20:12:19.616305   32280 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1002 20:12:19.616314   32280 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1002 20:12:19.616323   32280 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 20:12:19.616340   32280 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 20:12:19.616353   32280 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 20:12:19.616366   32280 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 20:12:19.616380   32280 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 20:12:19.616387   32280 command_runner.go:130] > # Example:
	I1002 20:12:19.616393   32280 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 20:12:19.616401   32280 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 20:12:19.616408   32280 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 20:12:19.616420   32280 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 20:12:19.616430   32280 command_runner.go:130] > # cpuset = "0-1"
	I1002 20:12:19.616435   32280 command_runner.go:130] > # cpushares = "5"
	I1002 20:12:19.616442   32280 command_runner.go:130] > # cpuquota = "1000"
	I1002 20:12:19.616451   32280 command_runner.go:130] > # cpuperiod = "100000"
	I1002 20:12:19.616457   32280 command_runner.go:130] > # cpulimit = "35"
	I1002 20:12:19.616466   32280 command_runner.go:130] > # Where:
	I1002 20:12:19.616473   32280 command_runner.go:130] > # The workload name is workload-type.
	I1002 20:12:19.616483   32280 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 20:12:19.616489   32280 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 20:12:19.616502   32280 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 20:12:19.616516   32280 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 20:12:19.616528   32280 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
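Put together, a pod opting into the example workload above could carry annotations like this sketch (following the $annotation_prefix.$resource/$ctrName form described earlier; names and values are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo                            # illustrative name
      annotations:
        io.crio/workload: ""                         # activation annotation (key only)
        io.crio.workload-type.cpushares/demo: "5"    # per-container cpushares override
    spec:
      containers:
      - name: demo
        image: registry.k8s.io/pause:3.10.1
    EOF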
	I1002 20:12:19.616541   32280 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1002 20:12:19.616551   32280 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1002 20:12:19.616560   32280 command_runner.go:130] > # Default value is set to true
	I1002 20:12:19.616566   32280 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1002 20:12:19.616574   32280 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1002 20:12:19.616582   32280 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1002 20:12:19.616592   32280 command_runner.go:130] > # Default value is set to 'false'
	I1002 20:12:19.616601   32280 command_runner.go:130] > # disable_hostport_mapping = false
	I1002 20:12:19.616612   32280 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1002 20:12:19.616624   32280 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1002 20:12:19.616632   32280 command_runner.go:130] > # timezone = ""
	I1002 20:12:19.616642   32280 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 20:12:19.616658   32280 command_runner.go:130] > #
	I1002 20:12:19.616667   32280 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 20:12:19.616686   32280 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1002 20:12:19.616695   32280 command_runner.go:130] > [crio.image]
	I1002 20:12:19.616703   32280 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 20:12:19.616714   32280 command_runner.go:130] > # default_transport = "docker://"
	I1002 20:12:19.616725   32280 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 20:12:19.616732   32280 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:12:19.616739   32280 command_runner.go:130] > # global_auth_file = ""
	I1002 20:12:19.616751   32280 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 20:12:19.616762   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.616771   32280 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1002 20:12:19.616783   32280 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 20:12:19.616795   32280 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:12:19.616804   32280 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:12:19.616811   32280 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 20:12:19.616817   32280 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 20:12:19.616825   32280 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1002 20:12:19.616830   32280 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1002 20:12:19.616837   32280 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 20:12:19.616842   32280 command_runner.go:130] > # pause_command = "/pause"
	I1002 20:12:19.616852   32280 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1002 20:12:19.616864   32280 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1002 20:12:19.616877   32280 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1002 20:12:19.616889   32280 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1002 20:12:19.616899   32280 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1002 20:12:19.616911   32280 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1002 20:12:19.616918   32280 command_runner.go:130] > # pinned_images = [
	I1002 20:12:19.616921   32280 command_runner.go:130] > # ]
	I1002 20:12:19.616928   32280 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 20:12:19.616937   32280 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 20:12:19.616942   32280 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 20:12:19.616947   32280 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 20:12:19.616955   32280 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 20:12:19.616959   32280 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1002 20:12:19.616965   32280 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1002 20:12:19.616973   32280 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1002 20:12:19.616979   32280 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1002 20:12:19.616988   32280 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1002 20:12:19.616997   32280 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1002 20:12:19.617009   32280 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
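For reference, the simplest containers-policy.json(5) of the kind signature_policy points at accepts every image; a sketch for test environments only (see the man page before relaxing verification anywhere that matters):

    sudo tee /etc/crio/policy.json <<'EOF'
    {
      "default": [
        { "type": "insecureAcceptAnything" }
      ]
    }
    EOF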
	I1002 20:12:19.617020   32280 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 20:12:19.617036   32280 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 20:12:19.617044   32280 command_runner.go:130] > # changing them here.
	I1002 20:12:19.617053   32280 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1002 20:12:19.617062   32280 command_runner.go:130] > # insecure_registries = [
	I1002 20:12:19.617066   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617073   32280 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 20:12:19.617078   32280 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 20:12:19.617084   32280 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 20:12:19.617089   32280 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 20:12:19.617095   32280 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 20:12:19.617101   32280 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1002 20:12:19.617107   32280 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1002 20:12:19.617111   32280 command_runner.go:130] > # auto_reload_registries = false
	I1002 20:12:19.617117   32280 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1002 20:12:19.617127   32280 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1002 20:12:19.617135   32280 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1002 20:12:19.617138   32280 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1002 20:12:19.617143   32280 command_runner.go:130] > # The mode of short name resolution.
	I1002 20:12:19.617149   32280 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1002 20:12:19.617158   32280 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1002 20:12:19.617163   32280 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1002 20:12:19.617169   32280 command_runner.go:130] > # short_name_mode = "enforcing"
	I1002 20:12:19.617175   32280 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1002 20:12:19.617182   32280 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1002 20:12:19.617186   32280 command_runner.go:130] > # oci_artifact_mount_support = true
	I1002 20:12:19.617192   32280 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 20:12:19.617197   32280 command_runner.go:130] > # CNI plugins.
	I1002 20:12:19.617200   32280 command_runner.go:130] > [crio.network]
	I1002 20:12:19.617206   32280 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 20:12:19.617212   32280 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1002 20:12:19.617219   32280 command_runner.go:130] > # cni_default_network = ""
	I1002 20:12:19.617231   32280 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 20:12:19.617240   32280 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 20:12:19.617246   32280 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 20:12:19.617250   32280 command_runner.go:130] > # plugin_dirs = [
	I1002 20:12:19.617254   32280 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 20:12:19.617256   32280 command_runner.go:130] > # ]
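A minimal CNI configuration of the kind picked up from network_dir might look like this sketch (a plain bridge network; the file name, network name, and subnet are illustrative):

    sudo tee /etc/cni/net.d/10-demo.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "demo-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }
    EOF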
	I1002 20:12:19.617261   32280 command_runner.go:130] > # List of included pod metrics.
	I1002 20:12:19.617266   32280 command_runner.go:130] > # included_pod_metrics = [
	I1002 20:12:19.617269   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617274   32280 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1002 20:12:19.617279   32280 command_runner.go:130] > [crio.metrics]
	I1002 20:12:19.617284   32280 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 20:12:19.617290   32280 command_runner.go:130] > # enable_metrics = false
	I1002 20:12:19.617294   32280 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 20:12:19.617298   32280 command_runner.go:130] > # Per default all metrics are enabled.
	I1002 20:12:19.617306   32280 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 20:12:19.617312   32280 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 20:12:19.617320   32280 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 20:12:19.617323   32280 command_runner.go:130] > # metrics_collectors = [
	I1002 20:12:19.617327   32280 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 20:12:19.617331   32280 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1002 20:12:19.617334   32280 command_runner.go:130] > # 	"containers_oom_total",
	I1002 20:12:19.617338   32280 command_runner.go:130] > # 	"processes_defunct",
	I1002 20:12:19.617341   32280 command_runner.go:130] > # 	"operations_total",
	I1002 20:12:19.617345   32280 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 20:12:19.617348   32280 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 20:12:19.617352   32280 command_runner.go:130] > # 	"operations_errors_total",
	I1002 20:12:19.617355   32280 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 20:12:19.617359   32280 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 20:12:19.617363   32280 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 20:12:19.617367   32280 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 20:12:19.617371   32280 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 20:12:19.617375   32280 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 20:12:19.617379   32280 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1002 20:12:19.617383   32280 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1002 20:12:19.617388   32280 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1002 20:12:19.617391   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617397   32280 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1002 20:12:19.617403   32280 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1002 20:12:19.617407   32280 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 20:12:19.617411   32280 command_runner.go:130] > # metrics_port = 9090
	I1002 20:12:19.617415   32280 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 20:12:19.617419   32280 command_runner.go:130] > # metrics_socket = ""
	I1002 20:12:19.617423   32280 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 20:12:19.617429   32280 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 20:12:19.617437   32280 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 20:12:19.617441   32280 command_runner.go:130] > # certificate on any modification event.
	I1002 20:12:19.617447   32280 command_runner.go:130] > # metrics_cert = ""
	I1002 20:12:19.617452   32280 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 20:12:19.617456   32280 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 20:12:19.617460   32280 command_runner.go:130] > # metrics_key = ""
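With enable_metrics switched on, the endpoint described above can be scraped directly; a quick check against the default host and port:

    curl -s http://127.0.0.1:9090/metrics | grep '^crio_'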
	I1002 20:12:19.617465   32280 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 20:12:19.617471   32280 command_runner.go:130] > [crio.tracing]
	I1002 20:12:19.617476   32280 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 20:12:19.617482   32280 command_runner.go:130] > # enable_tracing = false
	I1002 20:12:19.617488   32280 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1002 20:12:19.617494   32280 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1002 20:12:19.617500   32280 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1002 20:12:19.617506   32280 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
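Enabling tracing amounts to flipping the keys above in a drop-in; a sketch, assuming an OTLP/gRPC collector already listening locally:

    sudo tee /etc/crio/crio.conf.d/40-tracing.conf <<'EOF'
    [crio.tracing]
    enable_tracing = true
    tracing_endpoint = "127.0.0.1:4317"
    tracing_sampling_rate_per_million = 1000000   # always sample, per the comment above
    EOF
    sudo systemctl restart crio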
	I1002 20:12:19.617511   32280 command_runner.go:130] > # CRI-O NRI configuration.
	I1002 20:12:19.617514   32280 command_runner.go:130] > [crio.nri]
	I1002 20:12:19.617518   32280 command_runner.go:130] > # Globally enable or disable NRI.
	I1002 20:12:19.617524   32280 command_runner.go:130] > # enable_nri = true
	I1002 20:12:19.617527   32280 command_runner.go:130] > # NRI socket to listen on.
	I1002 20:12:19.617533   32280 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1002 20:12:19.617539   32280 command_runner.go:130] > # NRI plugin directory to use.
	I1002 20:12:19.617543   32280 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1002 20:12:19.617547   32280 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1002 20:12:19.617552   32280 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1002 20:12:19.617560   32280 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1002 20:12:19.617591   32280 command_runner.go:130] > # nri_disable_connections = false
	I1002 20:12:19.617598   32280 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1002 20:12:19.617604   32280 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1002 20:12:19.617612   32280 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1002 20:12:19.617623   32280 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1002 20:12:19.617630   32280 command_runner.go:130] > # NRI default validator configuration.
	I1002 20:12:19.617637   32280 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1002 20:12:19.617645   32280 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1002 20:12:19.617661   32280 command_runner.go:130] > # can be restricted/rejected:
	I1002 20:12:19.617671   32280 command_runner.go:130] > # - OCI hook injection
	I1002 20:12:19.617683   32280 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1002 20:12:19.617691   32280 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1002 20:12:19.617696   32280 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1002 20:12:19.617702   32280 command_runner.go:130] > # - adjustment of linux namespaces
	I1002 20:12:19.617708   32280 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1002 20:12:19.617715   32280 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1002 20:12:19.617720   32280 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1002 20:12:19.617722   32280 command_runner.go:130] > #
	I1002 20:12:19.617726   32280 command_runner.go:130] > # [crio.nri.default_validator]
	I1002 20:12:19.617733   32280 command_runner.go:130] > # nri_enable_default_validator = false
	I1002 20:12:19.617737   32280 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1002 20:12:19.617743   32280 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1002 20:12:19.617750   32280 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1002 20:12:19.617755   32280 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1002 20:12:19.617759   32280 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1002 20:12:19.617764   32280 command_runner.go:130] > # nri_validator_required_plugins = [
	I1002 20:12:19.617767   32280 command_runner.go:130] > # ]
	I1002 20:12:19.617771   32280 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
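The default validator described above is enabled through the same keys; a sketch that rejects OCI hook injection while leaving the other adjustments permitted:

    sudo tee /etc/crio/crio.conf.d/50-nri-validator.conf <<'EOF'
    [crio.nri.default_validator]
    nri_enable_default_validator = true
    nri_validator_reject_oci_hook_adjustment = true
    EOF
    sudo systemctl restart crio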
	I1002 20:12:19.617779   32280 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 20:12:19.617782   32280 command_runner.go:130] > [crio.stats]
	I1002 20:12:19.617787   32280 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 20:12:19.617796   32280 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 20:12:19.617800   32280 command_runner.go:130] > # stats_collection_period = 0
	I1002 20:12:19.617807   32280 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1002 20:12:19.617812   32280 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1002 20:12:19.617819   32280 command_runner.go:130] > # collection_period = 0
	I1002 20:12:19.617847   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597735388Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1002 20:12:19.617857   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597762161Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1002 20:12:19.617879   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597788561Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1002 20:12:19.617891   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597814431Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1002 20:12:19.617901   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.597905829Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:12:19.617910   32280 command_runner.go:130] ! time="2025-10-02T20:12:19.59812179Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1002 20:12:19.617937   32280 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 20:12:19.618034   32280 cni.go:84] Creating CNI manager for ""
	I1002 20:12:19.618045   32280 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:12:19.618055   32280 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:12:19.618074   32280 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753218 NodeName:functional-753218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:12:19.618185   32280 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753218"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
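As an aside, a generated config like the one above can be sanity-checked before kubeadm consumes it; a sketch using kubeadm's own validator (available in recent kubeadm releases) against the paths this run uses:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new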
	
	I1002 20:12:19.618237   32280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:12:19.625483   32280 command_runner.go:130] > kubeadm
	I1002 20:12:19.625499   32280 command_runner.go:130] > kubectl
	I1002 20:12:19.625503   32280 command_runner.go:130] > kubelet
	I1002 20:12:19.626080   32280 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:12:19.626131   32280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:12:19.633273   32280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:12:19.644695   32280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:12:19.656113   32280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1002 20:12:19.667414   32280 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:12:19.670740   32280 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1002 20:12:19.670794   32280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:12:19.752159   32280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:12:19.764280   32280 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218 for IP: 192.168.49.2
	I1002 20:12:19.764303   32280 certs.go:195] generating shared ca certs ...
	I1002 20:12:19.764324   32280 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:19.764461   32280 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:12:19.764507   32280 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:12:19.764516   32280 certs.go:257] generating profile certs ...
	I1002 20:12:19.764596   32280 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key
	I1002 20:12:19.764641   32280 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key.2c64f804
	I1002 20:12:19.764700   32280 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key
	I1002 20:12:19.764711   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:12:19.764723   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:12:19.764735   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:12:19.764749   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:12:19.764761   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:12:19.764773   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:12:19.764785   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:12:19.764797   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:12:19.764840   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:12:19.764868   32280 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:12:19.764878   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:12:19.764907   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:12:19.764932   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:12:19.764953   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:12:19.764991   32280 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:12:19.765016   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:19.765029   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.765042   32280 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:12:19.765474   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:12:19.782548   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:12:19.799734   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:12:19.816390   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:12:19.832589   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:12:19.848700   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:12:19.864849   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:12:19.880775   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:12:19.896846   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:12:19.913614   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:12:19.929578   32280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:12:19.945677   32280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:12:19.957745   32280 ssh_runner.go:195] Run: openssl version
	I1002 20:12:19.963258   32280 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1002 20:12:19.963501   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:12:19.971695   32280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.975234   32280 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.975257   32280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:12:19.975294   32280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:12:20.009021   32280 command_runner.go:130] > 51391683
	I1002 20:12:20.009100   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:12:20.016966   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:12:20.025422   32280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.029194   32280 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.029238   32280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.029282   32280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:12:20.064218   32280 command_runner.go:130] > 3ec20f2e
	I1002 20:12:20.064321   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:12:20.072502   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:12:20.080739   32280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.084507   32280 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.084542   32280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.084576   32280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:12:20.118973   32280 command_runner.go:130] > b5213941
	I1002 20:12:20.119045   32280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
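The hash-and-symlink sequence above is OpenSSL's standard CA lookup scheme: openssl x509 -hash prints the subject-name hash, and OpenSSL resolves trust anchors through <hash>.N symlinks under /etc/ssl/certs. Condensed from the commands above (the .0 suffix marks the first certificate with that hash):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"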
	I1002 20:12:20.127219   32280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:12:20.130733   32280 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:12:20.130756   32280 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1002 20:12:20.130765   32280 command_runner.go:130] > Device: 8,1	Inode: 579408      Links: 1
	I1002 20:12:20.130774   32280 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:12:20.130783   32280 command_runner.go:130] > Access: 2025-10-02 20:08:10.644972655 +0000
	I1002 20:12:20.130793   32280 command_runner.go:130] > Modify: 2025-10-02 20:04:06.596879146 +0000
	I1002 20:12:20.130799   32280 command_runner.go:130] > Change: 2025-10-02 20:04:06.596879146 +0000
	I1002 20:12:20.130806   32280 command_runner.go:130] >  Birth: 2025-10-02 20:04:06.596879146 +0000
	I1002 20:12:20.130872   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:12:20.164340   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.164601   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:12:20.199434   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.199512   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:12:20.233489   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.233589   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:12:20.266980   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.267235   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:12:20.300792   32280 command_runner.go:130] > Certificate will not expire
	I1002 20:12:20.301105   32280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 20:12:20.334621   32280 command_runner.go:130] > Certificate will not expire
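Each `-checkend 86400` call above asks OpenSSL whether the certificate expires within the next 24 hours; a zero exit status produces the "Certificate will not expire" lines. The same check in pure Go, as a sketch (the file path is the one from this log, not a fixed convention):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// willExpireWithin reports whether the first certificate in pemBytes
// expires within d, the pure-Go equivalent of `openssl x509 -checkend`.
func willExpireWithin(pemBytes []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	expiring, err := willExpireWithin(data, 86400*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if expiring {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```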
	I1002 20:12:20.334895   32280 kubeadm.go:400] StartCluster: {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
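The StartCluster line dumps minikube's full cluster configuration. For orientation, here is a trimmed, illustrative Go struct covering a few of the fields visible above; the real struct in minikube carries many more:

```go
package main

import "fmt"

// ClusterConfig here is a deliberately reduced stand-in for the struct
// printed in the StartCluster log line, kept only to name the key fields.
type ClusterConfig struct {
	Name              string
	Memory            int // MiB
	CPUs              int
	DiskSize          int // MiB
	Driver            string
	APIServerPort     int
	KubernetesVersion string
	ContainerRuntime  string
}

func main() {
	cfg := ClusterConfig{
		Name:              "functional-753218",
		Memory:            4096,
		CPUs:              2,
		DiskSize:          20000,
		Driver:            "docker",
		APIServerPort:     8441,
		KubernetesVersion: "v1.34.1",
		ContainerRuntime:  "crio",
	}
	fmt.Printf("%+v\n", cfg)
}
```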
	I1002 20:12:20.334978   32280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:12:20.335040   32280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:12:20.362233   32280 cri.go:89] found id: ""
	I1002 20:12:20.362287   32280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:12:20.370000   32280 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 20:12:20.370022   32280 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 20:12:20.370028   32280 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 20:12:20.370045   32280 kubeadm.go:416] found existing configuration files, will attempt cluster restart
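The `sudo ls` probe above is how minikube decides between bootstrapping a fresh control plane and restarting an existing one: the kubeadm artifacts are still on disk, so it attempts a restart. A hedged local sketch of that decision (treating all three paths as required is an assumption of this sketch, not a claim about kubeadm.go):

```go
package main

import (
	"fmt"
	"os"
)

// hasExistingKubeadmState reports whether the node still carries kubeadm
// artifacts, mirroring the `sudo ls` probe in the log; the paths are the
// ones listed above.
func hasExistingKubeadmState() bool {
	paths := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	for _, p := range paths {
		if _, err := os.Stat(p); err != nil {
			return false
		}
	}
	return true
}

func main() {
	if hasExistingKubeadmState() {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no existing configuration, will bootstrap a fresh control plane")
	}
}
```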
	I1002 20:12:20.370050   32280 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:12:20.370092   32280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:12:20.377231   32280 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:12:20.377306   32280 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-753218" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.377343   32280 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-9327/kubeconfig needs updating (will repair): [kubeconfig missing "functional-753218" cluster setting kubeconfig missing "functional-753218" context setting]
	I1002 20:12:20.377618   32280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:20.379016   32280 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.379143   32280 kapi.go:59] client config for functional-753218: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
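The rest.Config dump above is built from the just-repaired kubeconfig. The standard client-go way to load an equivalent config looks like this (a sketch; the kubeconfig path is the one from this run and would differ elsewhere):

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a rest.Config from a kubeconfig file, as the kapi helper does for
	// the functional-753218 profile; the path below is taken from this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21683-9327/kubeconfig")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("API server:", cfg.Host) // e.g. https://192.168.49.2:8441
}
```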
	I1002 20:12:20.379525   32280 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:12:20.379543   32280 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:12:20.379548   32280 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:12:20.379552   32280 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:12:20.379556   32280 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:12:20.379580   32280 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:12:20.379896   32280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:12:20.387047   32280 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:12:20.387086   32280 kubeadm.go:601] duration metric: took 17.030465ms to restartPrimaryControlPlane
	I1002 20:12:20.387097   32280 kubeadm.go:402] duration metric: took 52.210982ms to StartCluster
	I1002 20:12:20.387113   32280 settings.go:142] acquiring lock: {Name:mk7417292ad351472478c5266970b58ebdd4130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:20.387221   32280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.387762   32280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:12:20.387978   32280 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:12:20.388069   32280 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:12:20.388123   32280 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:12:20.388170   32280 addons.go:69] Setting storage-provisioner=true in profile "functional-753218"
	I1002 20:12:20.388189   32280 addons.go:238] Setting addon storage-provisioner=true in "functional-753218"
	I1002 20:12:20.388224   32280 host.go:66] Checking if "functional-753218" exists ...
	I1002 20:12:20.388188   32280 addons.go:69] Setting default-storageclass=true in profile "functional-753218"
	I1002 20:12:20.388303   32280 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-753218"
	I1002 20:12:20.388534   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:20.388593   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:20.390858   32280 out.go:179] * Verifying Kubernetes components...
	I1002 20:12:20.392041   32280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:12:20.408831   32280 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:12:20.409013   32280 kapi.go:59] client config for functional-753218: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:12:20.409334   32280 addons.go:238] Setting addon default-storageclass=true in "functional-753218"
	I1002 20:12:20.409372   32280 host.go:66] Checking if "functional-753218" exists ...
	I1002 20:12:20.409857   32280 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:12:20.409921   32280 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:12:20.411389   32280 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:20.411408   32280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:12:20.411451   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:20.434249   32280 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:20.434269   32280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:12:20.434323   32280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:12:20.437366   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:20.453124   32280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:12:20.491163   32280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:12:20.504681   32280 node_ready.go:35] waiting up to 6m0s for node "functional-753218" to be "Ready" ...
	I1002 20:12:20.504813   32280 type.go:168] "Request Body" body=""
	I1002 20:12:20.504901   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:20.505187   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
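The GET /api/v1/nodes/functional-753218 round-trips that follow are a readiness poll: roughly every 500ms the client asks the apiserver for the node and checks its Ready condition, tolerating the connection-refused errors while the restarted apiserver comes back up. A client-go sketch of that loop (the interval and error handling are assumptions, not minikube's exact node_ready.go logic):

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the apiserver every interval until the named node
// reports Ready or the timeout elapses, like the loop in the log above.
func waitNodeReady(cs *kubernetes.Clientset, name string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		// Transient errors (e.g. connection refused) are retried, not fatal.
		time.Sleep(interval)
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := waitNodeReady(cs, "functional-753218", 500*time.Millisecond, 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```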
	I1002 20:12:20.544925   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:20.560749   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:20.598254   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:20.598305   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.598334   32280 retry.go:31] will retry after 360.790251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
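Each failed `kubectl apply` above is rescheduled with a jittered, roughly growing delay (360ms here, then 409ms, 1.03s, 1.83s, and so on for the same manifest later in the log). A generic Go sketch of that retry-with-backoff shape; the exact schedule is an assumption, not minikube's retry.go implementation:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts are exhausted,
// sleeping a jittered, increasing delay between tries, in the spirit of the
// "will retry after ..." lines above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Double the delay each attempt and add jitter so retries spread out.
		d := base * time.Duration(1<<i)
		d += time.Duration(rand.Int63n(int64(d)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connection refused") // stand-in for the apply failure
		}
		return nil
	})
	fmt.Println("result:", err)
}
```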
	I1002 20:12:20.611750   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:20.611829   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.611854   32280 retry.go:31] will retry after 210.270105ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.822270   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:20.872283   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:20.874485   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.874514   32280 retry.go:31] will retry after 244.966298ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:20.959846   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:21.005341   32280 type.go:168] "Request Body" body=""
	I1002 20:12:21.005421   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:21.005781   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:21.012418   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.012451   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.012466   32280 retry.go:31] will retry after 409.292121ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.119728   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:21.168429   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.170739   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.170771   32280 retry.go:31] will retry after 294.217693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.422106   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:21.465688   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:21.470239   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.472502   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.472537   32280 retry.go:31] will retry after 332.995728ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.505685   32280 type.go:168] "Request Body" body=""
	I1002 20:12:21.505778   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:21.506123   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:21.516911   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.516971   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.516996   32280 retry.go:31] will retry after 954.810325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.806393   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:21.857573   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:21.857614   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:21.857637   32280 retry.go:31] will retry after 1.033500231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.004877   32280 type.go:168] "Request Body" body=""
	I1002 20:12:22.004976   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:22.005310   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:22.472906   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:22.505435   32280 type.go:168] "Request Body" body=""
	I1002 20:12:22.505517   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:22.505893   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:22.505957   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:22.524411   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:22.524454   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.524474   32280 retry.go:31] will retry after 931.915639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.892005   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:22.942851   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:22.942928   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:22.942955   32280 retry.go:31] will retry after 1.834952264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:23.004927   32280 type.go:168] "Request Body" body=""
	I1002 20:12:23.005007   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:23.005354   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:23.456821   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:23.505094   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:23.505147   32280 type.go:168] "Request Body" body=""
	I1002 20:12:23.505217   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:23.505484   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:23.507597   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:23.507626   32280 retry.go:31] will retry after 2.313716894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:24.005157   32280 type.go:168] "Request Body" body=""
	I1002 20:12:24.005267   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:24.005594   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:24.505508   32280 type.go:168] "Request Body" body=""
	I1002 20:12:24.505632   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:24.506012   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:24.506092   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:24.778419   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:24.830315   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:24.830361   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:24.830382   32280 retry.go:31] will retry after 2.530323246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:25.005736   32280 type.go:168] "Request Body" body=""
	I1002 20:12:25.005808   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:25.006117   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:25.504853   32280 type.go:168] "Request Body" body=""
	I1002 20:12:25.504920   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:25.505273   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:25.821714   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:25.872812   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:25.872859   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:25.872881   32280 retry.go:31] will retry after 1.957365536s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:26.005078   32280 type.go:168] "Request Body" body=""
	I1002 20:12:26.005153   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:26.005503   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:26.505250   32280 type.go:168] "Request Body" body=""
	I1002 20:12:26.505323   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:26.505711   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:27.005530   32280 type.go:168] "Request Body" body=""
	I1002 20:12:27.005599   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:27.005959   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:27.006023   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:27.361473   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:27.411520   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:27.413776   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:27.413807   32280 retry.go:31] will retry after 3.768585845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:27.504922   32280 type.go:168] "Request Body" body=""
	I1002 20:12:27.505019   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:27.505351   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:27.830904   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:27.880071   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:27.882324   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:27.882350   32280 retry.go:31] will retry after 2.676983733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:28.005719   32280 type.go:168] "Request Body" body=""
	I1002 20:12:28.005789   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:28.006101   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:28.504826   32280 type.go:168] "Request Body" body=""
	I1002 20:12:28.504909   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:28.505226   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:29.004968   32280 type.go:168] "Request Body" body=""
	I1002 20:12:29.005052   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:29.005361   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:29.505178   32280 type.go:168] "Request Body" body=""
	I1002 20:12:29.505270   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:29.505576   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:12:29.505628   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:12:30.005335   32280 type.go:168] "Request Body" body=""
	I1002 20:12:30.005400   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:30.005747   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:30.505557   32280 type.go:168] "Request Body" body=""
	I1002 20:12:30.505643   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:30.505971   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:30.560186   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:30.610807   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:30.610870   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:30.610892   32280 retry.go:31] will retry after 7.973230912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:31.005274   32280 type.go:168] "Request Body" body=""
	I1002 20:12:31.005351   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:12:31.005789   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:12:31.182990   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:31.231953   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:31.234462   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:31.234491   32280 retry.go:31] will retry after 5.687657455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[condensed: GET https://192.168.49.2:8441/api/v1/nodes/functional-753218 polled every ~500 ms from 20:12:31.504 to 20:12:36.505, always with empty responses (milliseconds=0); node_ready.go:55 warned every ~2 s: error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused]
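
node_ready.go:55 is polling the node object about twice a second and checking its Ready condition; every GET dies with connection refused because nothing is listening on 8441 yet. The same check written against client-go might look like the sketch below (the kubeconfig path and 2-minute budget are assumptions; this is not minikube's code):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: a kubeconfig at the default location (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Poll every 500ms (the cadence in the log) for up to 2 minutes.
        // Transient errors like "connection refused" are swallowed so the
        // poll keeps retrying instead of aborting.
        err = wait.PollUntilContextTimeout(context.Background(),
            500*time.Millisecond, 2*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := client.CoreV1().Nodes().Get(ctx,
                    "functional-753218", metav1.GetOptions{})
                if err != nil {
                    fmt.Println("will retry:", err)
                    return false, nil
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("wait finished:", err)
    }
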
	I1002 20:12:36.922844   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:36.972691   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:36.975093   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:36.975120   32280 retry.go:31] will retry after 6.057609391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout and stderr identical to the block above)
	[condensed: same ~500 ms node poll, 20:12:37.005 to 20:12:38.506; all attempts refused]
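
The paired round_trippers.go:527/632 entries come from a transport wrapper that logs each request and the latency of its response. A minimal stand-in for such a wrapper, using only net/http (logRT is a hypothetical name, not client-go's implementation):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // logRT wraps another RoundTripper and logs the verb, URL and latency
    // of every request, mirroring the "Request"/"Response" pairs above.
    type logRT struct{ next http.RoundTripper }

    func (l logRT) RoundTrip(req *http.Request) (*http.Response, error) {
        start := time.Now()
        fmt.Printf("Request verb=%q url=%q\n", req.Method, req.URL)
        resp, err := l.next.RoundTrip(req)
        status := ""
        if resp != nil {
            status = resp.Status
        }
        // A refused connection yields an empty status and a non-nil error,
        // much like the milliseconds=0 responses in this log.
        fmt.Printf("Response status=%q milliseconds=%d err=%v\n",
            status, time.Since(start).Milliseconds(), err)
        return resp, err
    }

    func main() {
        client := &http.Client{Transport: logRT{next: http.DefaultTransport}}
        if _, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-753218"); err != nil {
            fmt.Println("request failed:", err)
        }
    }
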
	I1002 20:12:38.584343   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:38.634498   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:38.634541   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:38.634559   32280 retry.go:31] will retry after 11.473349324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout and stderr identical to the block above)
	[condensed: same ~500 ms node poll, 20:12:39.004 to 20:12:43.006; all attempts refused]
	I1002 20:12:43.033216   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:43.084626   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:43.084680   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:43.084700   32280 retry.go:31] will retry after 13.696949746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout and stderr identical to the block above)
	[condensed: same ~500 ms node poll, 20:12:43.504 to 20:12:50.005; all attempts refused]
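
Every failure so far is the same dial tcp ... connect: connection refused, i.e. nothing is listening on port 8441 yet. A raw TCP dial can establish that without spending a whole kubectl run; a sketch (the address is taken from the log, the timeout is an arbitrary assumption):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // apiserverUp reports whether something accepts TCP connections on
    // addr. "connection refused" fails immediately, so this is a cheap
    // pre-check before re-running kubectl apply.
    func apiserverUp(addr string, timeout time.Duration) bool {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            fmt.Println("not reachable:", err)
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        fmt.Println(apiserverUp("192.168.49.2:8441", 2*time.Second))
    }
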
	I1002 20:12:50.108603   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:50.158622   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:50.158675   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:50.158705   32280 retry.go:31] will retry after 7.866512619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout and stderr identical to the block above)
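
Each ssh_runner.go:195 / command_runner.go:130 pair runs a command and folds its exit status, stdout and stderr into the "Process exited with status 1" warnings above. Run locally, the same capture looks roughly like this os/exec sketch (the command line is copied from the log; it can only succeed on a host with those paths and a live apiserver):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        // sudo accepts leading VAR=value assignments, as the log shows.
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.34.1/kubectl",
            "apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        var stdout, stderr bytes.Buffer
        cmd.Stdout = &stdout
        cmd.Stderr = &stderr

        if err := cmd.Run(); err != nil {
            // A non-zero exit arrives as an *exec.ExitError, which is what
            // the "Process exited with status 1" warnings report.
            fmt.Printf("apply failed (%v)\nstdout:\n%s\nstderr:\n%s\n",
                err, stdout.String(), stderr.String())
        }
    }
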
	[condensed: same ~500 ms node poll, 20:12:50.505 to 20:12:56.506; all attempts refused]
	I1002 20:12:56.782639   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:12:56.831722   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:56.833971   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:56.834005   32280 retry.go:31] will retry after 8.803585786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout and stderr identical to the block above)
	[condensed: same ~500 ms node poll, 20:12:57.005 to 20:12:58.005; all attempts refused]
	I1002 20:12:58.025966   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:12:58.074036   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:12:58.076335   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:12:58.076367   32280 retry.go:31] will retry after 21.837732561s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	(stdout and stderr identical to the block above)
	[condensed: same ~500 ms node poll, 20:12:58.504 to 20:13:05.506; all attempts refused]
	I1002 20:13:05.638454   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:13:05.690182   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:05.690237   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:13:05.690256   32280 retry.go:31] will retry after 17.824989731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	(stdout and stderr identical to the block above)
	[condensed: same ~500 ms node poll, 20:13:06.005 to 20:13:19.506; all attempts refused]
	I1002 20:13:19.914795   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:13:19.964946   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:19.964982   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:13:19.964998   32280 retry.go:31] will retry after 37.877741779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
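The retry.go line above shows the failed apply being rescheduled with a randomized delay. A minimal sketch of that retry-with-backoff pattern follows; the attempt count and base delay are invented for illustration, and minikube's actual helper may differ:

    // Hypothetical retry helper in the spirit of the retry.go line above.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func withRetry(attempts int, base time.Duration, f func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
            // Exponential growth plus jitter, which is why delays like the
            // 37.877741779s above come out as odd fractions of a second.
            d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        err := withRetry(3, time.Second, func() error {
            return errors.New("dial tcp [::1]:8441: connect: connection refused")
        })
        fmt.Println("gave up:", err)
    }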
	[... polling of /api/v1/nodes/functional-753218 continued every ~500ms from 20:13:20.005 through 20:13:23.505, all refused; node_ready.go:55 repeated the retry warning at 20:13:20.506 and 20:13:22.506 ...]
	I1002 20:13:23.515608   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:13:23.566822   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:23.566879   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:13:23.566903   32280 retry.go:31] will retry after 23.13190401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
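The ssh_runner line above runs kubectl with KUBECONFIG passed through sudo as an environment assignment. Executed locally, the command boils down to something like this hedged os/exec sketch (the argv is copied from the log, but this is not minikube's ssh_runner implementation):

    // Hypothetical local equivalent of the ssh_runner command above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig", // sudo treats VAR=val as an env assignment
            "/var/lib/minikube/binaries/v1.34.1/kubectl",
            "apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            // With the apiserver refusing connections, kubectl fails openapi
            // validation and exits 1, exactly as logged above.
            fmt.Println("apply failed:", err)
        }
    }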
	I1002 20:13:24.005366   32280 type.go:168] "Request Body" body=""
	I1002 20:13:24.005433   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:24.005789   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical request/response cycles continued every ~500ms from 20:13:24.505 through 20:13:46.506, every attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 logged the same will-retry warning roughly every 2s ...]
	W1002 20:13:46.506269   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:13:46.699644   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:13:46.747344   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:46.749844   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:46.749973   32280 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:13:47.005313   32280 type.go:168] "Request Body" body=""
	I1002 20:13:47.005446   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:47.005788   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... polling continued every ~500ms from 20:13:47.505 through 20:13:57.506 with the same refused connections; node_ready.go:55 repeated the will-retry warning at 20:13:49.005, 20:13:51.505, 20:13:54.005 and 20:13:56.505 ...]
	I1002 20:13:57.843521   32280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:13:57.893953   32280 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:57.894023   32280 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:13:57.894118   32280 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:13:57.896474   32280 out.go:179] * Enabled addons: 
	I1002 20:13:57.898063   32280 addons.go:514] duration metric: took 1m37.510002204s for enable addons: enabled=[]
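The "duration metric" line above is a plain elapsed-time measurement around the addon-enable loop, which here ended with an empty enabled list because every apply failed. A trivial sketch of the pattern (illustrative only, not minikube's addons.go):

    // Hypothetical duration-metric logging around a unit of work.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        enabled := []string{} // every apply failed here, so the list stays empty
        fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
            time.Since(start), enabled)
    }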
	I1002 20:13:58.005248   32280 type.go:168] "Request Body" body=""
	I1002 20:13:58.005351   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:13:58.005671   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... polling continued every ~500ms from 20:13:58.505 through 20:14:08.505, still refused; node_ready.go:55 repeated the will-retry warning at 20:13:58.506, 20:14:01.005, 20:14:03.505, 20:14:06.005 and 20:14:08.505 ...]
	I1002 20:14:09.005324   32280 type.go:168] "Request Body" body=""
	I1002 20:14:09.005388   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:09.005759   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:09.505663   32280 type.go:168] "Request Body" body=""
	I1002 20:14:09.505738   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:09.506059   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:10.004831   32280 type.go:168] "Request Body" body=""
	I1002 20:14:10.004913   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:10.005285   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:10.504951   32280 type.go:168] "Request Body" body=""
	I1002 20:14:10.505047   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:10.505396   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:11.005158   32280 type.go:168] "Request Body" body=""
	I1002 20:14:11.005275   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:11.005733   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:11.005797   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:11.505549   32280 type.go:168] "Request Body" body=""
	I1002 20:14:11.505697   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:11.506073   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:12.005903   32280 type.go:168] "Request Body" body=""
	I1002 20:14:12.005966   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:12.006268   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:12.505003   32280 type.go:168] "Request Body" body=""
	I1002 20:14:12.505086   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:12.505427   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:13.004849   32280 type.go:168] "Request Body" body=""
	I1002 20:14:13.004968   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:13.005312   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:13.505032   32280 type.go:168] "Request Body" body=""
	I1002 20:14:13.505109   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:13.505438   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:13.505493   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:14.005138   32280 type.go:168] "Request Body" body=""
	I1002 20:14:14.005202   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:14.005533   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:14.505306   32280 type.go:168] "Request Body" body=""
	I1002 20:14:14.505402   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:14.505762   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:15.005543   32280 type.go:168] "Request Body" body=""
	I1002 20:14:15.005604   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:15.005962   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:15.505741   32280 type.go:168] "Request Body" body=""
	I1002 20:14:15.505841   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:15.506168   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:15.506245   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:16.005122   32280 type.go:168] "Request Body" body=""
	I1002 20:14:16.005232   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:16.005696   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:16.504984   32280 type.go:168] "Request Body" body=""
	I1002 20:14:16.505049   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:16.505370   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:17.004896   32280 type.go:168] "Request Body" body=""
	I1002 20:14:17.004982   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:17.005311   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:17.504836   32280 type.go:168] "Request Body" body=""
	I1002 20:14:17.504907   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:17.505220   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:18.005868   32280 type.go:168] "Request Body" body=""
	I1002 20:14:18.005951   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:18.006358   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:18.006423   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:18.504940   32280 type.go:168] "Request Body" body=""
	I1002 20:14:18.505026   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:18.505333   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:19.004866   32280 type.go:168] "Request Body" body=""
	I1002 20:14:19.004945   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:19.005275   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:19.505078   32280 type.go:168] "Request Body" body=""
	I1002 20:14:19.505155   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:19.505483   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:20.004994   32280 type.go:168] "Request Body" body=""
	I1002 20:14:20.005076   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:20.005381   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:20.505242   32280 type.go:168] "Request Body" body=""
	I1002 20:14:20.505307   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:20.505631   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:20.505718   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:21.005226   32280 type.go:168] "Request Body" body=""
	I1002 20:14:21.005289   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:21.005590   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:21.505335   32280 type.go:168] "Request Body" body=""
	I1002 20:14:21.505404   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:21.505749   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:22.005375   32280 type.go:168] "Request Body" body=""
	I1002 20:14:22.005439   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:22.005744   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:22.505304   32280 type.go:168] "Request Body" body=""
	I1002 20:14:22.505371   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:22.505716   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:22.505771   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:23.005272   32280 type.go:168] "Request Body" body=""
	I1002 20:14:23.005334   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:23.005644   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:23.505227   32280 type.go:168] "Request Body" body=""
	I1002 20:14:23.505324   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:23.505721   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:24.005280   32280 type.go:168] "Request Body" body=""
	I1002 20:14:24.005348   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:24.005690   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:24.505614   32280 type.go:168] "Request Body" body=""
	I1002 20:14:24.505707   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:24.506064   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:24.506123   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:25.005722   32280 type.go:168] "Request Body" body=""
	I1002 20:14:25.005789   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:25.006118   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:25.505754   32280 type.go:168] "Request Body" body=""
	I1002 20:14:25.505821   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:25.506147   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:26.005768   32280 type.go:168] "Request Body" body=""
	I1002 20:14:26.005838   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:26.006153   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:26.505742   32280 type.go:168] "Request Body" body=""
	I1002 20:14:26.505810   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:26.506121   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:26.506173   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:27.005763   32280 type.go:168] "Request Body" body=""
	I1002 20:14:27.005839   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:27.006182   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:27.505814   32280 type.go:168] "Request Body" body=""
	I1002 20:14:27.505878   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:27.506202   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:28.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:14:28.005938   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:28.006243   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:28.504821   32280 type.go:168] "Request Body" body=""
	I1002 20:14:28.504889   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:28.505244   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:29.005929   32280 type.go:168] "Request Body" body=""
	I1002 20:14:29.005998   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:29.006317   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:29.006373   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:29.505885   32280 type.go:168] "Request Body" body=""
	I1002 20:14:29.505955   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:29.506284   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:30.004871   32280 type.go:168] "Request Body" body=""
	I1002 20:14:30.004946   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:30.005283   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:30.505131   32280 type.go:168] "Request Body" body=""
	I1002 20:14:30.505212   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:30.505536   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:31.005137   32280 type.go:168] "Request Body" body=""
	I1002 20:14:31.005230   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:31.005549   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:31.505115   32280 type.go:168] "Request Body" body=""
	I1002 20:14:31.505177   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:31.505493   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:31.505544   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:32.005077   32280 type.go:168] "Request Body" body=""
	I1002 20:14:32.005142   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:32.005447   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:32.505767   32280 type.go:168] "Request Body" body=""
	I1002 20:14:32.505835   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:32.506138   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:33.005842   32280 type.go:168] "Request Body" body=""
	I1002 20:14:33.005927   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:33.006231   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:33.505868   32280 type.go:168] "Request Body" body=""
	I1002 20:14:33.505947   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:33.506252   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:33.506315   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:34.004818   32280 type.go:168] "Request Body" body=""
	I1002 20:14:34.004919   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:34.005210   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:34.504930   32280 type.go:168] "Request Body" body=""
	I1002 20:14:34.505008   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:34.505322   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:35.004949   32280 type.go:168] "Request Body" body=""
	I1002 20:14:35.005011   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:35.005319   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:35.505837   32280 type.go:168] "Request Body" body=""
	I1002 20:14:35.505935   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:35.506248   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:36.005867   32280 type.go:168] "Request Body" body=""
	I1002 20:14:36.005936   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:36.006232   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:36.006283   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:36.505902   32280 type.go:168] "Request Body" body=""
	I1002 20:14:36.506056   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:36.506384   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:37.004951   32280 type.go:168] "Request Body" body=""
	I1002 20:14:37.005021   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:37.005328   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:37.504906   32280 type.go:168] "Request Body" body=""
	I1002 20:14:37.504995   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:37.505334   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:38.004876   32280 type.go:168] "Request Body" body=""
	I1002 20:14:38.004944   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:38.005255   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:38.504831   32280 type.go:168] "Request Body" body=""
	I1002 20:14:38.504917   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:38.505277   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:38.505331   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:39.004819   32280 type.go:168] "Request Body" body=""
	I1002 20:14:39.004911   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:39.005204   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:39.505017   32280 type.go:168] "Request Body" body=""
	I1002 20:14:39.505087   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:39.505399   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:40.005080   32280 type.go:168] "Request Body" body=""
	I1002 20:14:40.005144   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:40.005445   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:40.505248   32280 type.go:168] "Request Body" body=""
	I1002 20:14:40.505310   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:40.505614   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:40.505711   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:41.005196   32280 type.go:168] "Request Body" body=""
	I1002 20:14:41.005309   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:41.005631   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:41.505223   32280 type.go:168] "Request Body" body=""
	I1002 20:14:41.505304   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:41.505623   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:42.005154   32280 type.go:168] "Request Body" body=""
	I1002 20:14:42.005238   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:42.005535   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:42.505095   32280 type.go:168] "Request Body" body=""
	I1002 20:14:42.505175   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:42.505514   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:43.005064   32280 type.go:168] "Request Body" body=""
	I1002 20:14:43.005128   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:43.005441   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:43.005493   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:43.504991   32280 type.go:168] "Request Body" body=""
	I1002 20:14:43.505079   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:43.505393   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:44.004948   32280 type.go:168] "Request Body" body=""
	I1002 20:14:44.005018   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:44.005312   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:44.505045   32280 type.go:168] "Request Body" body=""
	I1002 20:14:44.505109   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:44.505414   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:45.004946   32280 type.go:168] "Request Body" body=""
	I1002 20:14:45.005005   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:45.005307   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:45.504859   32280 type.go:168] "Request Body" body=""
	I1002 20:14:45.504931   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:45.505245   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:45.505309   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:46.005851   32280 type.go:168] "Request Body" body=""
	I1002 20:14:46.005934   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:46.006245   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:46.505842   32280 type.go:168] "Request Body" body=""
	I1002 20:14:46.505929   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:46.506226   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:47.005902   32280 type.go:168] "Request Body" body=""
	I1002 20:14:47.005962   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:47.006270   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:47.504848   32280 type.go:168] "Request Body" body=""
	I1002 20:14:47.504912   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:47.505208   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:48.005819   32280 type.go:168] "Request Body" body=""
	I1002 20:14:48.005910   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:48.006200   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:48.006262   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:48.504839   32280 type.go:168] "Request Body" body=""
	I1002 20:14:48.504925   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:48.505259   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:49.004816   32280 type.go:168] "Request Body" body=""
	I1002 20:14:49.004911   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:49.005214   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:49.504930   32280 type.go:168] "Request Body" body=""
	I1002 20:14:49.505022   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:49.505322   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:50.004888   32280 type.go:168] "Request Body" body=""
	I1002 20:14:50.004963   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:50.005258   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:50.505167   32280 type.go:168] "Request Body" body=""
	I1002 20:14:50.505271   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:50.505603   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:50.505700   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:51.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:14:51.005941   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:51.006228   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:51.505859   32280 type.go:168] "Request Body" body=""
	I1002 20:14:51.505973   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:51.506301   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:52.004831   32280 type.go:168] "Request Body" body=""
	I1002 20:14:52.004912   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:52.005216   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:52.504814   32280 type.go:168] "Request Body" body=""
	I1002 20:14:52.504898   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:52.505216   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:53.005826   32280 type.go:168] "Request Body" body=""
	I1002 20:14:53.005886   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:53.006180   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:53.006232   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:53.505812   32280 type.go:168] "Request Body" body=""
	I1002 20:14:53.505888   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:53.506201   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:54.005808   32280 type.go:168] "Request Body" body=""
	I1002 20:14:54.005871   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:54.006166   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:54.504871   32280 type.go:168] "Request Body" body=""
	I1002 20:14:54.504938   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:54.505247   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:55.004807   32280 type.go:168] "Request Body" body=""
	I1002 20:14:55.004892   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:55.005219   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:55.505889   32280 type.go:168] "Request Body" body=""
	I1002 20:14:55.505973   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:55.506277   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:55.506339   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:56.004856   32280 type.go:168] "Request Body" body=""
	I1002 20:14:56.004932   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:56.005222   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:56.504887   32280 type.go:168] "Request Body" body=""
	I1002 20:14:56.504963   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:56.505264   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:57.004822   32280 type.go:168] "Request Body" body=""
	I1002 20:14:57.004940   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:57.005238   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:57.505875   32280 type.go:168] "Request Body" body=""
	I1002 20:14:57.505940   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:57.506273   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:14:58.005858   32280 type.go:168] "Request Body" body=""
	I1002 20:14:58.005932   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:58.006233   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:14:58.006297   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:14:58.504813   32280 type.go:168] "Request Body" body=""
	I1002 20:14:58.504910   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:14:58.505221   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET /api/v1/nodes/functional-753218 poll repeated every ~500ms from 20:14:59.005 through 20:15:59.505, each attempt returning an empty response; node_ready.go:55 logged the same "connection refused" will-retry warning roughly every 2–2.5s over that span (20:15:00 through 20:15:59) ...]
	I1002 20:16:00.005007   32280 type.go:168] "Request Body" body=""
	I1002 20:16:00.005070   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:00.005368   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:00.505154   32280 type.go:168] "Request Body" body=""
	I1002 20:16:00.505223   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:00.505548   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:01.005111   32280 type.go:168] "Request Body" body=""
	I1002 20:16:01.005187   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:01.005494   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:01.005546   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:01.505154   32280 type.go:168] "Request Body" body=""
	I1002 20:16:01.505217   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:01.505529   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:02.005146   32280 type.go:168] "Request Body" body=""
	I1002 20:16:02.005224   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:02.005550   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:02.505113   32280 type.go:168] "Request Body" body=""
	I1002 20:16:02.505181   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:02.505501   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:03.005066   32280 type.go:168] "Request Body" body=""
	I1002 20:16:03.005132   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:03.005422   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:03.505093   32280 type.go:168] "Request Body" body=""
	I1002 20:16:03.505162   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:03.505508   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:03.505564   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:04.005055   32280 type.go:168] "Request Body" body=""
	I1002 20:16:04.005119   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:04.005406   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:04.505180   32280 type.go:168] "Request Body" body=""
	I1002 20:16:04.505248   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:04.505566   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:05.005130   32280 type.go:168] "Request Body" body=""
	I1002 20:16:05.005192   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:05.005494   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:05.505063   32280 type.go:168] "Request Body" body=""
	I1002 20:16:05.505131   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:05.505442   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:06.005022   32280 type.go:168] "Request Body" body=""
	I1002 20:16:06.005086   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:06.005392   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:06.005444   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:06.505030   32280 type.go:168] "Request Body" body=""
	I1002 20:16:06.505095   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:06.505395   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:07.004971   32280 type.go:168] "Request Body" body=""
	I1002 20:16:07.005038   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:07.005337   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:07.504911   32280 type.go:168] "Request Body" body=""
	I1002 20:16:07.505004   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:07.505316   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:08.004917   32280 type.go:168] "Request Body" body=""
	I1002 20:16:08.004990   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:08.005311   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:08.504887   32280 type.go:168] "Request Body" body=""
	I1002 20:16:08.504958   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:08.505256   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:08.505311   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:09.005884   32280 type.go:168] "Request Body" body=""
	I1002 20:16:09.005950   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:09.006258   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:09.505071   32280 type.go:168] "Request Body" body=""
	I1002 20:16:09.505141   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:09.505485   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:10.005085   32280 type.go:168] "Request Body" body=""
	I1002 20:16:10.005150   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:10.005494   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:10.505286   32280 type.go:168] "Request Body" body=""
	I1002 20:16:10.505357   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:10.505685   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:10.505751   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:11.005245   32280 type.go:168] "Request Body" body=""
	I1002 20:16:11.005311   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:11.005606   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:11.505183   32280 type.go:168] "Request Body" body=""
	I1002 20:16:11.505245   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:11.505547   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:12.005105   32280 type.go:168] "Request Body" body=""
	I1002 20:16:12.005169   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:12.005459   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:12.505029   32280 type.go:168] "Request Body" body=""
	I1002 20:16:12.505094   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:12.505392   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:13.005040   32280 type.go:168] "Request Body" body=""
	I1002 20:16:13.005104   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:13.005422   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:13.005474   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:13.504990   32280 type.go:168] "Request Body" body=""
	I1002 20:16:13.505055   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:13.505357   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:14.004946   32280 type.go:168] "Request Body" body=""
	I1002 20:16:14.005015   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:14.005324   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:14.505076   32280 type.go:168] "Request Body" body=""
	I1002 20:16:14.505142   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:14.505433   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:15.005063   32280 type.go:168] "Request Body" body=""
	I1002 20:16:15.005134   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:15.005446   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:15.005498   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:15.504952   32280 type.go:168] "Request Body" body=""
	I1002 20:16:15.505022   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:15.505328   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:16.004912   32280 type.go:168] "Request Body" body=""
	I1002 20:16:16.004990   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:16.005339   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:16.505464   32280 type.go:168] "Request Body" body=""
	I1002 20:16:16.505571   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:16.505963   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:17.005818   32280 type.go:168] "Request Body" body=""
	I1002 20:16:17.005930   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:17.006240   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:17.006295   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:17.504827   32280 type.go:168] "Request Body" body=""
	I1002 20:16:17.504891   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:17.505213   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:18.005877   32280 type.go:168] "Request Body" body=""
	I1002 20:16:18.005946   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:18.006281   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:18.505257   32280 type.go:168] "Request Body" body=""
	I1002 20:16:18.505334   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:18.505711   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:19.005252   32280 type.go:168] "Request Body" body=""
	I1002 20:16:19.005317   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:19.005634   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:19.505459   32280 type.go:168] "Request Body" body=""
	I1002 20:16:19.505521   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:19.505917   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:19.505979   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:20.005531   32280 type.go:168] "Request Body" body=""
	I1002 20:16:20.005594   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:20.005938   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:20.505740   32280 type.go:168] "Request Body" body=""
	I1002 20:16:20.505803   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:20.506120   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:21.005728   32280 type.go:168] "Request Body" body=""
	I1002 20:16:21.005789   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:21.006134   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:21.505734   32280 type.go:168] "Request Body" body=""
	I1002 20:16:21.505799   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:21.506152   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:21.506214   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:22.005776   32280 type.go:168] "Request Body" body=""
	I1002 20:16:22.005835   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:22.006129   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:22.505854   32280 type.go:168] "Request Body" body=""
	I1002 20:16:22.505921   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:22.506271   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:23.004819   32280 type.go:168] "Request Body" body=""
	I1002 20:16:23.004883   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:23.005226   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:23.504886   32280 type.go:168] "Request Body" body=""
	I1002 20:16:23.504953   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:23.505310   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:24.004892   32280 type.go:168] "Request Body" body=""
	I1002 20:16:24.004960   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:24.005258   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:24.005327   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:24.505080   32280 type.go:168] "Request Body" body=""
	I1002 20:16:24.505161   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:24.505504   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:25.005053   32280 type.go:168] "Request Body" body=""
	I1002 20:16:25.005119   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:25.005426   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:25.505026   32280 type.go:168] "Request Body" body=""
	I1002 20:16:25.505087   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:25.505410   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:26.004953   32280 type.go:168] "Request Body" body=""
	I1002 20:16:26.005021   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:26.005328   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:26.005378   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:26.504910   32280 type.go:168] "Request Body" body=""
	I1002 20:16:26.504977   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:26.505326   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:27.005842   32280 type.go:168] "Request Body" body=""
	I1002 20:16:27.005906   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:27.006192   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:27.505877   32280 type.go:168] "Request Body" body=""
	I1002 20:16:27.505952   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:27.506276   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:28.004832   32280 type.go:168] "Request Body" body=""
	I1002 20:16:28.004908   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:28.005212   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:28.505846   32280 type.go:168] "Request Body" body=""
	I1002 20:16:28.505928   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:28.506279   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:28.506330   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:29.004829   32280 type.go:168] "Request Body" body=""
	I1002 20:16:29.004904   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:29.005217   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:29.505056   32280 type.go:168] "Request Body" body=""
	I1002 20:16:29.505125   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:29.505465   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:30.005011   32280 type.go:168] "Request Body" body=""
	I1002 20:16:30.005075   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:30.005370   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:30.505105   32280 type.go:168] "Request Body" body=""
	I1002 20:16:30.505170   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:30.505455   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:31.005091   32280 type.go:168] "Request Body" body=""
	I1002 20:16:31.005160   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:31.005463   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:31.005521   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:31.504995   32280 type.go:168] "Request Body" body=""
	I1002 20:16:31.505061   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:31.505362   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:32.005845   32280 type.go:168] "Request Body" body=""
	I1002 20:16:32.005909   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:32.006188   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:32.505814   32280 type.go:168] "Request Body" body=""
	I1002 20:16:32.505878   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:32.506185   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:33.005817   32280 type.go:168] "Request Body" body=""
	I1002 20:16:33.005884   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:33.006190   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:33.006257   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:33.505816   32280 type.go:168] "Request Body" body=""
	I1002 20:16:33.505892   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:33.506205   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:34.005835   32280 type.go:168] "Request Body" body=""
	I1002 20:16:34.005898   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:34.006219   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:34.504923   32280 type.go:168] "Request Body" body=""
	I1002 20:16:34.504996   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:34.505358   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:35.004928   32280 type.go:168] "Request Body" body=""
	I1002 20:16:35.005004   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:35.005345   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:35.504930   32280 type.go:168] "Request Body" body=""
	I1002 20:16:35.504994   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:35.505319   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:35.505372   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:36.004925   32280 type.go:168] "Request Body" body=""
	I1002 20:16:36.004992   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:36.005316   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:36.504877   32280 type.go:168] "Request Body" body=""
	I1002 20:16:36.504954   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:36.505294   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:37.005839   32280 type.go:168] "Request Body" body=""
	I1002 20:16:37.005910   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:37.006248   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:37.505873   32280 type.go:168] "Request Body" body=""
	I1002 20:16:37.505941   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:37.506266   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:37.506318   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:38.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:16:38.005944   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:38.006246   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:38.504902   32280 type.go:168] "Request Body" body=""
	I1002 20:16:38.504969   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:38.505303   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:39.004874   32280 type.go:168] "Request Body" body=""
	I1002 20:16:39.004947   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:39.005260   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:39.505046   32280 type.go:168] "Request Body" body=""
	I1002 20:16:39.505118   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:39.505463   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:40.004989   32280 type.go:168] "Request Body" body=""
	I1002 20:16:40.005054   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:40.005341   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:40.005393   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:40.505166   32280 type.go:168] "Request Body" body=""
	I1002 20:16:40.505235   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:40.505560   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:41.005152   32280 type.go:168] "Request Body" body=""
	I1002 20:16:41.005218   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:41.005554   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:41.505090   32280 type.go:168] "Request Body" body=""
	I1002 20:16:41.505158   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:41.505444   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:42.005073   32280 type.go:168] "Request Body" body=""
	I1002 20:16:42.005138   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:42.005449   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:42.005504   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:42.505067   32280 type.go:168] "Request Body" body=""
	I1002 20:16:42.505134   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:42.505424   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:43.004978   32280 type.go:168] "Request Body" body=""
	I1002 20:16:43.005045   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:43.005360   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:43.504918   32280 type.go:168] "Request Body" body=""
	I1002 20:16:43.504994   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:43.505315   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:44.004897   32280 type.go:168] "Request Body" body=""
	I1002 20:16:44.004973   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:44.005278   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:44.505052   32280 type.go:168] "Request Body" body=""
	I1002 20:16:44.505115   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:44.505420   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:44.505478   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:45.004947   32280 type.go:168] "Request Body" body=""
	I1002 20:16:45.005019   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:45.005322   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:45.504923   32280 type.go:168] "Request Body" body=""
	I1002 20:16:45.504993   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:45.505338   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:46.004905   32280 type.go:168] "Request Body" body=""
	I1002 20:16:46.004979   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:46.005286   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:46.504835   32280 type.go:168] "Request Body" body=""
	I1002 20:16:46.504925   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:46.505219   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:47.005826   32280 type.go:168] "Request Body" body=""
	I1002 20:16:47.005892   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:47.006200   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:47.006269   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:16:47.505816   32280 type.go:168] "Request Body" body=""
	I1002 20:16:47.505884   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:47.506197   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:48.005806   32280 type.go:168] "Request Body" body=""
	I1002 20:16:48.005870   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:48.006179   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:48.505827   32280 type.go:168] "Request Body" body=""
	I1002 20:16:48.505888   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:48.506194   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:49.005815   32280 type.go:168] "Request Body" body=""
	I1002 20:16:49.005894   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:49.006203   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:16:49.504963   32280 type.go:168] "Request Body" body=""
	I1002 20:16:49.505034   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:16:49.505380   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:16:49.505431   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-753218 poll repeats every ~500 ms from 20:16:50 through 20:17:50; each attempt gets no response (status="", headers="", milliseconds=0), and node_ready.go:55 logs the identical "connection refused" warning roughly every 2.5 s, the last of which is: ...]
	W1002 20:17:50.505532   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:51.005024   32280 type.go:168] "Request Body" body=""
	I1002 20:17:51.005103   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:51.005420   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:51.504998   32280 type.go:168] "Request Body" body=""
	I1002 20:17:51.505075   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:51.505410   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:52.005000   32280 type.go:168] "Request Body" body=""
	I1002 20:17:52.005081   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:52.005428   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:52.505012   32280 type.go:168] "Request Body" body=""
	I1002 20:17:52.505100   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:52.505419   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:53.005015   32280 type.go:168] "Request Body" body=""
	I1002 20:17:53.005100   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:53.005438   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:53.005495   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:53.504988   32280 type.go:168] "Request Body" body=""
	I1002 20:17:53.505059   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:53.505385   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:54.004971   32280 type.go:168] "Request Body" body=""
	I1002 20:17:54.005048   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:54.005388   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:54.505199   32280 type.go:168] "Request Body" body=""
	I1002 20:17:54.505286   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:54.505624   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:55.005199   32280 type.go:168] "Request Body" body=""
	I1002 20:17:55.005287   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:55.005639   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:55.005734   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:55.505238   32280 type.go:168] "Request Body" body=""
	I1002 20:17:55.505303   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:55.505621   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:56.005174   32280 type.go:168] "Request Body" body=""
	I1002 20:17:56.005258   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:56.005612   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:56.505166   32280 type.go:168] "Request Body" body=""
	I1002 20:17:56.505231   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:56.505523   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:57.005076   32280 type.go:168] "Request Body" body=""
	I1002 20:17:57.005156   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:57.005631   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:57.505096   32280 type.go:168] "Request Body" body=""
	I1002 20:17:57.505167   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:57.505488   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:57.505554   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:17:58.005160   32280 type.go:168] "Request Body" body=""
	I1002 20:17:58.005227   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:58.005552   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:58.505084   32280 type.go:168] "Request Body" body=""
	I1002 20:17:58.505166   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:58.505512   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:59.005073   32280 type.go:168] "Request Body" body=""
	I1002 20:17:59.005138   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:59.005430   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:17:59.505390   32280 type.go:168] "Request Body" body=""
	I1002 20:17:59.505459   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:17:59.505823   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:17:59.505890   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:00.005468   32280 type.go:168] "Request Body" body=""
	I1002 20:18:00.005540   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:00.005877   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:00.505768   32280 type.go:168] "Request Body" body=""
	I1002 20:18:00.505843   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:00.506177   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:01.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:18:01.005945   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:01.006334   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:01.504923   32280 type.go:168] "Request Body" body=""
	I1002 20:18:01.504996   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:01.505321   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:02.004953   32280 type.go:168] "Request Body" body=""
	I1002 20:18:02.005017   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:02.005334   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:02.005385   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:02.504884   32280 type.go:168] "Request Body" body=""
	I1002 20:18:02.504963   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:02.505259   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:03.004934   32280 type.go:168] "Request Body" body=""
	I1002 20:18:03.005005   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:03.005356   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:03.504932   32280 type.go:168] "Request Body" body=""
	I1002 20:18:03.505015   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:03.505307   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:04.004878   32280 type.go:168] "Request Body" body=""
	I1002 20:18:04.004960   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:04.005291   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:04.505045   32280 type.go:168] "Request Body" body=""
	I1002 20:18:04.505131   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:04.505465   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:04.505520   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:05.005008   32280 type.go:168] "Request Body" body=""
	I1002 20:18:05.005088   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:05.005422   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:05.504977   32280 type.go:168] "Request Body" body=""
	I1002 20:18:05.505046   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:05.505355   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:06.004890   32280 type.go:168] "Request Body" body=""
	I1002 20:18:06.004955   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:06.005271   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:06.505878   32280 type.go:168] "Request Body" body=""
	I1002 20:18:06.505942   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:06.506244   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:06.506297   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:07.005866   32280 type.go:168] "Request Body" body=""
	I1002 20:18:07.005943   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:07.006253   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:07.504887   32280 type.go:168] "Request Body" body=""
	I1002 20:18:07.504964   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:07.505251   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:08.004916   32280 type.go:168] "Request Body" body=""
	I1002 20:18:08.004981   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:08.005306   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:08.504856   32280 type.go:168] "Request Body" body=""
	I1002 20:18:08.504941   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:08.505239   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:09.005880   32280 type.go:168] "Request Body" body=""
	I1002 20:18:09.005952   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:09.006285   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:09.006339   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:09.505080   32280 type.go:168] "Request Body" body=""
	I1002 20:18:09.505146   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:09.505447   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:10.005082   32280 type.go:168] "Request Body" body=""
	I1002 20:18:10.005147   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:10.005473   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:10.505242   32280 type.go:168] "Request Body" body=""
	I1002 20:18:10.505307   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:10.505606   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:11.005169   32280 type.go:168] "Request Body" body=""
	I1002 20:18:11.005243   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:11.005570   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:11.505121   32280 type.go:168] "Request Body" body=""
	I1002 20:18:11.505186   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:11.505487   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:11.505538   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:12.005071   32280 type.go:168] "Request Body" body=""
	I1002 20:18:12.005141   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:12.005461   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:12.505817   32280 type.go:168] "Request Body" body=""
	I1002 20:18:12.505883   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:12.506177   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:13.005815   32280 type.go:168] "Request Body" body=""
	I1002 20:18:13.005887   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:13.006211   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:13.505873   32280 type.go:168] "Request Body" body=""
	I1002 20:18:13.505942   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:13.506236   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:13.506287   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:14.004813   32280 type.go:168] "Request Body" body=""
	I1002 20:18:14.004883   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:14.005208   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:14.505838   32280 type.go:168] "Request Body" body=""
	I1002 20:18:14.505912   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:14.506225   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:15.005871   32280 type.go:168] "Request Body" body=""
	I1002 20:18:15.005949   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:15.006278   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:15.504830   32280 type.go:168] "Request Body" body=""
	I1002 20:18:15.504900   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:15.505190   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:16.004845   32280 type.go:168] "Request Body" body=""
	I1002 20:18:16.004935   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:16.005267   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:16.005321   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:16.504844   32280 type.go:168] "Request Body" body=""
	I1002 20:18:16.504915   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:16.505208   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:17.004848   32280 type.go:168] "Request Body" body=""
	I1002 20:18:17.005199   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:17.005523   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:17.505033   32280 type.go:168] "Request Body" body=""
	I1002 20:18:17.505107   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:17.505434   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:18.004982   32280 type.go:168] "Request Body" body=""
	I1002 20:18:18.005069   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:18.005443   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:18.005498   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:18.505161   32280 type.go:168] "Request Body" body=""
	I1002 20:18:18.505228   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:18.505530   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:19.005238   32280 type.go:168] "Request Body" body=""
	I1002 20:18:19.005302   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:19.005626   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:19.505401   32280 type.go:168] "Request Body" body=""
	I1002 20:18:19.505466   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:19.505798   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:18:20.005591   32280 type.go:168] "Request Body" body=""
	I1002 20:18:20.005673   32280 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753218" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:18:20.006000   32280 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:18:20.006051   32280 node_ready.go:55] error getting node "functional-753218" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753218": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:18:20.505823   32280 type.go:168] "Request Body" body=""
	I1002 20:18:20.505886   32280 node_ready.go:38] duration metric: took 6m0.001160736s for node "functional-753218" to be "Ready" ...
	I1002 20:18:20.508034   32280 out.go:203] 
	W1002 20:18:20.509328   32280 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:18:20.509341   32280 out.go:285] * 
	W1002 20:18:20.511008   32280 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:18:20.512144   32280 out.go:203] 
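The refused GETs above are minikube's node-readiness wait: the same /api/v1/nodes/functional-753218 request is retried every 500ms, "connection refused" is treated as retryable, and the loop only gives up when its 6m0s budget expires, which is what surfaces as the GUEST_START exit. A minimal Go sketch of that poll-until-deadline shape, using the apimachinery wait helpers; checkNodeReady is a hypothetical stand-in for minikube's actual probe, not its real code:

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	// checkNodeReady stands in for minikube's probe: a GET against
	// /api/v1/nodes/<name> followed by a scan of the "Ready" condition.
	// Transient failures such as "connection refused" come back as
	// (false, nil) so the poll retries instead of aborting early.
	func checkNodeReady(ctx context.Context) (bool, error) {
		return false, nil // assumption: apiserver stays down, as in the log above
	}

	func main() {
		// 500ms interval with a 6-minute ceiling, matching the cadence and
		// the "took 6m0.001160736s" deadline recorded in the log.
		err := wait.PollUntilContextTimeout(context.Background(),
			500*time.Millisecond, 6*time.Minute, true, checkNodeReady)
		if err != nil {
			fmt.Println("node never became Ready:", err) // context deadline exceeded
		}
	}

The design point this illustrates is that transient errors are swallowed rather than propagated, so only the deadline, not a flaky connection, ends the wait.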
	
	
	==> CRI-O <==
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.349389136Z" level=info msg="createCtr: removing container d8a2e4886e59a5763e357c59eb0ae7ac013d8ca2bfe6e431c5c1f6bc3ee79896" id=cb98d186-791f-4fe9-8927-8ea0b105f661 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.349425042Z" level=info msg="createCtr: deleting container d8a2e4886e59a5763e357c59eb0ae7ac013d8ca2bfe6e431c5c1f6bc3ee79896 from storage" id=cb98d186-791f-4fe9-8927-8ea0b105f661 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.352229148Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-753218_kube-system_91f2b96cb3e3f380ded17f30e8d873bd_0" id=18325a5c-d189-43e2-a8f5-039b6780aeb2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.352676709Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-753218_kube-system_8f4d4ea1035e2535a9c472062bfdd7f7_0" id=cb98d186-791f-4fe9-8927-8ea0b105f661 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.579454847Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=a3e2887a-6b09-4204-8e87-28529019cb15 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.57958345Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=a3e2887a-6b09-4204-8e87-28529019cb15 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.579624227Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=a3e2887a-6b09-4204-8e87-28529019cb15 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.602981263Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=93dfff73-bd26-4d79-8160-b58f90868992 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.603116325Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=93dfff73-bd26-4d79-8160-b58f90868992 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.603199314Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=93dfff73-bd26-4d79-8160-b58f90868992 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.627047166Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=603ae9c4-b014-4e6b-9625-95255fb541a2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.627191214Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=603ae9c4-b014-4e6b-9625-95255fb541a2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:29 functional-753218 crio[2940]: time="2025-10-02T20:18:29.627226617Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=603ae9c4-b014-4e6b-9625-95255fb541a2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.059447852Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=868c30ac-55cd-4028-b41d-22cc3439b9eb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.313718275Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=09cf8f43-c411-4182-bf84-97ffb0d81e59 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.314581091Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=849bac91-7cc7-4cf5-bf85-a2b1e1b06303 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.315468648Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-753218/kube-scheduler" id=6d300eef-d6ea-42e3-b80b-3cbd04dc0c1f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.31575443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.319060123Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.319462218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.342136538Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=6d300eef-d6ea-42e3-b80b-3cbd04dc0c1f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.343860435Z" level=info msg="createCtr: deleting container ID 262457ea237ebad471b5ae976b91bbbdd55dcd0de930648457c28448315cf7af from idIndex" id=6d300eef-d6ea-42e3-b80b-3cbd04dc0c1f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.3439106Z" level=info msg="createCtr: removing container 262457ea237ebad471b5ae976b91bbbdd55dcd0de930648457c28448315cf7af" id=6d300eef-d6ea-42e3-b80b-3cbd04dc0c1f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.343967174Z" level=info msg="createCtr: deleting container 262457ea237ebad471b5ae976b91bbbdd55dcd0de930648457c28448315cf7af from storage" id=6d300eef-d6ea-42e3-b80b-3cbd04dc0c1f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:18:30 functional-753218 crio[2940]: time="2025-10-02T20:18:30.346765146Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-753218_kube-system_b25a71e49a335bbe853872de1b1e3093_0" id=6d300eef-d6ea-42e3-b80b-3cbd04dc0c1f name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:18:33.403478    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:18:33.404018    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:18:33.405524    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:18:33.405886    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:18:33.407333    5438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:18:33 up  1:01,  0 user,  load average: 0.38, 0.13, 0.09
	Linux functional-753218 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:18:29 functional-753218 kubelet[1799]: E1002 20:18:29.313003    1799 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:18:29 functional-753218 kubelet[1799]: E1002 20:18:29.352456    1799 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:18:29 functional-753218 kubelet[1799]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:29 functional-753218 kubelet[1799]:  > podSandboxID="65675f5fefd97e29be9e11728def45d5a2c472bac18f3ca682b57fda50e5abf7"
	Oct 02 20:18:29 functional-753218 kubelet[1799]: E1002 20:18:29.352552    1799 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:18:29 functional-753218 kubelet[1799]:         container etcd start failed in pod etcd-functional-753218_kube-system(91f2b96cb3e3f380ded17f30e8d873bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:29 functional-753218 kubelet[1799]:  > logger="UnhandledError"
	Oct 02 20:18:29 functional-753218 kubelet[1799]: E1002 20:18:29.352592    1799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753218" podUID="91f2b96cb3e3f380ded17f30e8d873bd"
	Oct 02 20:18:29 functional-753218 kubelet[1799]: E1002 20:18:29.352911    1799 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:18:29 functional-753218 kubelet[1799]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:29 functional-753218 kubelet[1799]:  > podSandboxID="055d32a868ccc672da5251b2017711a92949e7226757dee30bfd43e3d0b93077"
	Oct 02 20:18:29 functional-753218 kubelet[1799]: E1002 20:18:29.353003    1799 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:18:29 functional-753218 kubelet[1799]:         container kube-apiserver start failed in pod kube-apiserver-functional-753218_kube-system(8f4d4ea1035e2535a9c472062bfdd7f7): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:29 functional-753218 kubelet[1799]:  > logger="UnhandledError"
	Oct 02 20:18:29 functional-753218 kubelet[1799]: E1002 20:18:29.354056    1799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753218" podUID="8f4d4ea1035e2535a9c472062bfdd7f7"
	Oct 02 20:18:30 functional-753218 kubelet[1799]: E1002 20:18:30.313205    1799 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:18:30 functional-753218 kubelet[1799]: E1002 20:18:30.347061    1799 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:18:30 functional-753218 kubelet[1799]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:30 functional-753218 kubelet[1799]:  > podSandboxID="de1cc60186f989d4e0a8994c95a3f2e5173970c97e595ad7db2d469e1551df14"
	Oct 02 20:18:30 functional-753218 kubelet[1799]: E1002 20:18:30.347182    1799 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:18:30 functional-753218 kubelet[1799]:         container kube-scheduler start failed in pod kube-scheduler-functional-753218_kube-system(b25a71e49a335bbe853872de1b1e3093): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:18:30 functional-753218 kubelet[1799]:  > logger="UnhandledError"
	Oct 02 20:18:30 functional-753218 kubelet[1799]: E1002 20:18:30.347221    1799 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753218" podUID="b25a71e49a335bbe853872de1b1e3093"
	Oct 02 20:18:32 functional-753218 kubelet[1799]: E1002 20:18:32.353321    1799 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-753218\" not found"
	Oct 02 20:18:32 functional-753218 kubelet[1799]: E1002 20:18:32.816821    1799 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-753218.186ac570b511e75f\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753218.186ac570b511e75f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753218,UID:functional-753218,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node functional-753218 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:functional-753218,},FirstTimestamp:2025-10-02 20:08:12.306458463 +0000 UTC m=+0.389053367,LastTimestamp:2025-10-02 20:08:12.307668719 +0000 UTC m=+0.390263643,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753218,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218: exit status 2 (290.058847ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753218" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.00s)
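The harness gates its kubectl-based diagnostics on minikube's status template, which is why the remaining checks are skipped once the apiserver reports Stopped. The same probe can be replayed by hand against this profile:

	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p functional-753218
	# prints the component state ("Stopped" in this run); as the helper notes,
	# the accompanying exit status 2 "may be ok" rather than a harness error
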

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (733.62s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-753218 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-753218 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (12m11.912009445s)

                                                
                                                
-- stdout --
	* [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-753218" primary control-plane node in "functional-753218" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000852315s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000487211s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000480662s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000745923s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000782032s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000736768s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000901114s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001071178s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	(stdout, stderr, and wait-control-plane error identical to the "X Error starting cluster" kubeadm report above)
	* 

                                                
                                                
** /stderr **
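The pattern above is consistent across both kubeadm attempts: the kubelet reports healthy within about a second, but all three control-plane endpoints refuse connections for the full 4m0s window, which points at the static pods never starting rather than at a slow boot. A minimal way to re-run those checks by hand, using the exact endpoints and crictl invocations from the kubeadm output above (this assumes shell access to the node, e.g. via `minikube ssh -p functional-753218`, and that `sudo` is needed for crictl):

	# Probe the same endpoints kubeadm's control-plane-check polls (self-signed certs, hence -k)
	curl -k https://192.168.49.2:8441/livez       # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz       # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez         # kube-scheduler

	# List Kubernetes containers and fetch logs from a failing one, per the advice in the log
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID   # substitute a real container ID

With "connection refused" on all three ports, the crictl listing (not the curl probes) is the informative step: it shows whether the apiserver, controller-manager, and scheduler containers were ever created and why they exited.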
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-753218 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:776: restart took 12m11.91330209s for "functional-753218" cluster.
I1002 20:30:46.079669   12851 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753218
helpers_test.go:243: (dbg) docker inspect functional-753218:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	        "Created": "2025-10-02T20:04:01.746609873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27425,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:04:01.779241985Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hosts",
	        "LogPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2-json.log",
	        "Name": "/functional-753218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	                "LowerDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753218",
	                "Source": "/var/lib/docker/volumes/functional-753218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753218",
	                "name.minikube.sigs.k8s.io": "functional-753218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "004081b2f3377fd186346a9b01eb1a4bc9a3def8255492ba6d60a3d97d3ae02f",
	            "SandboxKey": "/var/run/docker/netns/004081b2f337",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:da:57:41:87:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5a9808f557430f8bb8f634f8b99193d9e4883c7bab84d388ec04ce3ac0291ef6",
	                    "EndpointID": "2b07b33f476d99b70eed072dbaf735ba0ad3a85f48f17fd63921a2bfbfd2db45",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753218",
	                        "13014a14c3fc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
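Individual fields of this inspect output can be extracted with a Go template instead of scanning the full JSON; the sketch below mirrors the cli_runner.go invocation that appears later in this log, and the expected value comes from the NetworkSettings.Ports block above:

	# Pull the host port mapped to the container's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-753218
	# -> 32778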
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218: exit status 2 (279.53699ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
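Note that `--format={{.Host}}` queries only the container state, so "Running" combined with a non-zero exit status is expected here: the host is up while the Kubernetes components are not. A fuller check would drop the format template (hypothetical invocation; the exact field names in the output are assumed, not taken from this log):

	out/minikube-linux-amd64 status -p functional-753218
	# With the control plane down, the host field should still read Running while the
	# kubelet/apiserver fields report a non-running state, matching the exit status 2 above.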
helpers_test.go:252: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 logs -n 25
helpers_test.go:260: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                            │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                            │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                            │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                               │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                               │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                               │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ delete  │ -p nospam-547008                                                                                              │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ start   │ -p functional-753218 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │                     │
	│ start   │ -p functional-753218 --alsologtostderr -v=8                                                                   │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │                     │
	│ cache   │ functional-753218 cache add registry.k8s.io/pause:3.1                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ functional-753218 cache add registry.k8s.io/pause:3.3                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ functional-753218 cache add registry.k8s.io/pause:latest                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ functional-753218 cache add minikube-local-cache-test:functional-753218                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ functional-753218 cache delete minikube-local-cache-test:functional-753218                                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ ssh     │ functional-753218 ssh sudo crictl images                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ ssh     │ functional-753218 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ ssh     │ functional-753218 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ cache   │ functional-753218 cache reload                                                                                │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ ssh     │ functional-753218 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ kubectl │ functional-753218 kubectl -- --context functional-753218 get pods                                             │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ start   │ -p functional-753218 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:18:34
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:18:34.206207   39074 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:18:34.206493   39074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:34.206497   39074 out.go:374] Setting ErrFile to fd 2...
	I1002 20:18:34.206500   39074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:34.206690   39074 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:18:34.207119   39074 out.go:368] Setting JSON to false
	I1002 20:18:34.208025   39074 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3663,"bootTime":1759432651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:18:34.208099   39074 start.go:140] virtualization: kvm guest
	I1002 20:18:34.211076   39074 out.go:179] * [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:18:34.212342   39074 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:18:34.212345   39074 notify.go:221] Checking for updates...
	I1002 20:18:34.213685   39074 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:18:34.214912   39074 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:18:34.216075   39074 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:18:34.217175   39074 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:18:34.218365   39074 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:18:34.219862   39074 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:18:34.219970   39074 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:18:34.243293   39074 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:18:34.243370   39074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:34.294846   39074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 20:18:34.285071909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:18:34.294933   39074 docker.go:319] overlay module found
	I1002 20:18:34.296853   39074 out.go:179] * Using the docker driver based on existing profile
	I1002 20:18:34.297994   39074 start.go:306] selected driver: docker
	I1002 20:18:34.298010   39074 start.go:936] validating driver "docker" against &{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:34.298070   39074 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:18:34.298154   39074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:34.347576   39074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 20:18:34.338434102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:18:34.348199   39074 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:18:34.348218   39074 cni.go:84] Creating CNI manager for ""
	I1002 20:18:34.348268   39074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:18:34.348308   39074 start.go:350] cluster config:
	{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:34.350240   39074 out.go:179] * Starting "functional-753218" primary control-plane node in "functional-753218" cluster
	I1002 20:18:34.351573   39074 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:18:34.353042   39074 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:18:34.354380   39074 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:18:34.354407   39074 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:18:34.354414   39074 cache.go:59] Caching tarball of preloaded images
	I1002 20:18:34.354480   39074 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:18:34.354514   39074 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:18:34.354521   39074 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:18:34.354600   39074 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/config.json ...
	I1002 20:18:34.373723   39074 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:18:34.373737   39074 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:18:34.373750   39074 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:18:34.373779   39074 start.go:361] acquireMachinesLock for functional-753218: {Name:mk742badf6f1dbafca8397e398143e758831ae3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:18:34.373825   39074 start.go:365] duration metric: took 33.687µs to acquireMachinesLock for "functional-753218"
	I1002 20:18:34.373838   39074 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:18:34.373845   39074 fix.go:55] fixHost starting: 
	I1002 20:18:34.374037   39074 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:18:34.391194   39074 fix.go:113] recreateIfNeeded on functional-753218: state=Running err=<nil>
	W1002 20:18:34.391212   39074 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:18:34.393102   39074 out.go:252] * Updating the running docker "functional-753218" container ...
	I1002 20:18:34.393135   39074 machine.go:93] provisionDockerMachine start ...
	I1002 20:18:34.393196   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:34.410850   39074 main.go:141] libmachine: Using SSH client type: native
	I1002 20:18:34.411066   39074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:18:34.411072   39074 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:18:34.552329   39074 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:18:34.552359   39074 ubuntu.go:182] provisioning hostname "functional-753218"
	I1002 20:18:34.552416   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:34.570052   39074 main.go:141] libmachine: Using SSH client type: native
	I1002 20:18:34.570307   39074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:18:34.570319   39074 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753218 && echo "functional-753218" | sudo tee /etc/hostname
	I1002 20:18:34.721441   39074 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:18:34.721512   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:34.738897   39074 main.go:141] libmachine: Using SSH client type: native
	I1002 20:18:34.739113   39074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:18:34.739125   39074 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753218/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:18:34.881059   39074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:18:34.881084   39074 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:18:34.881113   39074 ubuntu.go:190] setting up certificates
	I1002 20:18:34.881121   39074 provision.go:84] configureAuth start
	I1002 20:18:34.881164   39074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:18:34.899501   39074 provision.go:143] copyHostCerts
	I1002 20:18:34.899560   39074 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:18:34.899574   39074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:18:34.899678   39074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:18:34.899811   39074 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:18:34.899820   39074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:18:34.899861   39074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:18:34.899952   39074 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:18:34.899957   39074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:18:34.899992   39074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:18:34.900070   39074 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.functional-753218 san=[127.0.0.1 192.168.49.2 functional-753218 localhost minikube]
	I1002 20:18:35.209717   39074 provision.go:177] copyRemoteCerts
	I1002 20:18:35.209761   39074 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:18:35.209800   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.226488   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:35.326447   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:18:35.342793   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:18:35.359162   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:18:35.375197   39074 provision.go:87] duration metric: took 494.066038ms to configureAuth
	I1002 20:18:35.375214   39074 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:18:35.375353   39074 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:18:35.375460   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.392271   39074 main.go:141] libmachine: Using SSH client type: native
	I1002 20:18:35.392535   39074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:18:35.392555   39074 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:18:35.662001   39074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:18:35.662017   39074 machine.go:96] duration metric: took 1.268875772s to provisionDockerMachine
	I1002 20:18:35.662029   39074 start.go:294] postStartSetup for "functional-753218" (driver="docker")
	I1002 20:18:35.662042   39074 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:18:35.662106   39074 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:18:35.662147   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.679558   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:35.779752   39074 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:18:35.783115   39074 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:18:35.783131   39074 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:18:35.783153   39074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:18:35.783280   39074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:18:35.783385   39074 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:18:35.783488   39074 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts -> hosts in /etc/test/nested/copy/12851
	I1002 20:18:35.783529   39074 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12851
	I1002 20:18:35.791362   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:18:35.807703   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts --> /etc/test/nested/copy/12851/hosts (40 bytes)
	I1002 20:18:35.824578   39074 start.go:297] duration metric: took 162.536937ms for postStartSetup
	I1002 20:18:35.824707   39074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:18:35.824741   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.842117   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:35.939428   39074 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
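Both df probes above pull a single field from the second (data) row of df's output: with -h, $5 is the Use% column; with -BG, $4 is the space still available in gibibytes. Run standalone (the sample values in the comments are illustrative, not from this run):

	df -h /var | awk 'NR==2{print $5}'    # Use% of /var, e.g. "23%"
	df -BG /var | awk 'NR==2{print $4}'   # available space, e.g. "15G"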
	I1002 20:18:35.943787   39074 fix.go:57] duration metric: took 1.569934708s for fixHost
	I1002 20:18:35.943804   39074 start.go:84] releasing machines lock for "functional-753218", held for 1.569972452s
	I1002 20:18:35.943864   39074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:18:35.960772   39074 ssh_runner.go:195] Run: cat /version.json
	I1002 20:18:35.960815   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.960859   39074 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:18:35.960900   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.978069   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:35.978425   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:36.126122   39074 ssh_runner.go:195] Run: systemctl --version
	I1002 20:18:36.132369   39074 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:18:36.165368   39074 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:18:36.169751   39074 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:18:36.169819   39074 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:18:36.177394   39074 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
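The find/mv pass above sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix (none existed here), so the kindnet config installed later is the only active one. The core of the idiom, written out plainly:

	# rename rather than delete, so the original configs can be restored later
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;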
	I1002 20:18:36.177405   39074 start.go:496] detecting cgroup driver to use...
	I1002 20:18:36.177434   39074 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:18:36.177487   39074 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:18:36.191941   39074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:18:36.203333   39074 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:18:36.203390   39074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:18:36.216968   39074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:18:36.228214   39074 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:18:36.308949   39074 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:18:36.392928   39074 docker.go:234] disabling docker service ...
	I1002 20:18:36.392976   39074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:18:36.406808   39074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:18:36.418402   39074 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:18:36.501067   39074 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:18:36.583824   39074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:18:36.595669   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:18:36.609110   39074 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:18:36.609154   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.617194   39074 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:18:36.617240   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.625324   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.633155   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.641048   39074 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:18:36.648837   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.656786   39074 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.664478   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.672362   39074 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:18:36.678936   39074 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:18:36.685474   39074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:18:36.766185   39074 ssh_runner.go:195] Run: sudo systemctl restart crio
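Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (a sketch reconstructed from the commands; the [crio.*] section headers are assumed from a stock CRI-O config rather than captured from the machine):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]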
	I1002 20:18:36.872474   39074 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:18:36.872521   39074 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:18:36.876161   39074 start.go:564] Will wait 60s for crictl version
	I1002 20:18:36.876199   39074 ssh_runner.go:195] Run: which crictl
	I1002 20:18:36.879320   39074 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:18:36.901521   39074 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:18:36.901576   39074 ssh_runner.go:195] Run: crio --version
	I1002 20:18:36.927454   39074 ssh_runner.go:195] Run: crio --version
	I1002 20:18:36.955669   39074 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:18:36.956820   39074 cli_runner.go:164] Run: docker network inspect functional-753218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:18:36.973453   39074 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:18:36.979247   39074 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 20:18:36.980537   39074 kubeadm.go:883] updating cluster {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:18:36.980633   39074 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:18:36.980707   39074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:18:37.012555   39074 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:18:37.012566   39074 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:18:37.012602   39074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:18:37.037114   39074 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:18:37.037125   39074 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:18:37.037130   39074 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:18:37.037235   39074 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
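The doubled ExecStart in the unit above is the standard systemd override idiom: inside the 10-kubeadm.conf drop-in, the empty ExecStart= first clears the command inherited from kubelet.service, and the second line installs minikube's own invocation. The merged result can be inspected with:

	systemctl cat kubelet                # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart  # the effective command line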
	I1002 20:18:37.037301   39074 ssh_runner.go:195] Run: crio config
	I1002 20:18:37.080633   39074 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 20:18:37.080675   39074 cni.go:84] Creating CNI manager for ""
	I1002 20:18:37.080685   39074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:18:37.080697   39074 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:18:37.080715   39074 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753218 NodeName:functional-753218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:18:37.080819   39074 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753218"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
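	A config like the three-document YAML above can be sanity-checked before any init phase runs; assuming a kubeadm recent enough to ship the `config validate` subcommand (as v1.34 is), something like:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml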
	
	I1002 20:18:37.080866   39074 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:18:37.088458   39074 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:18:37.088499   39074 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:18:37.095835   39074 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:18:37.107722   39074 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:18:37.119278   39074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1002 20:18:37.130821   39074 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:18:37.134590   39074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:18:37.217285   39074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:18:37.229402   39074 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218 for IP: 192.168.49.2
	I1002 20:18:37.229423   39074 certs.go:195] generating shared ca certs ...
	I1002 20:18:37.229445   39074 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:18:37.229580   39074 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:18:37.229612   39074 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:18:37.229635   39074 certs.go:257] generating profile certs ...
	I1002 20:18:37.229744   39074 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key
	I1002 20:18:37.229781   39074 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key.2c64f804
	I1002 20:18:37.229820   39074 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key
	I1002 20:18:37.229920   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:18:37.229944   39074 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:18:37.229949   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:18:37.229969   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:18:37.229988   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:18:37.230004   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:18:37.230036   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:18:37.230546   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:18:37.247164   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:18:37.262985   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:18:37.279026   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:18:37.294907   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:18:37.311017   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:18:37.326759   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:18:37.342531   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:18:37.358985   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:18:37.375049   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:18:37.390853   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:18:37.406776   39074 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:18:37.418137   39074 ssh_runner.go:195] Run: openssl version
	I1002 20:18:37.423758   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:18:37.431400   39074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:18:37.434759   39074 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:18:37.434796   39074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:18:37.469193   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:18:37.476976   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:18:37.484860   39074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:18:37.488438   39074 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:18:37.488489   39074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:18:37.521688   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:18:37.529613   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:18:37.537558   39074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:18:37.541046   39074 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:18:37.541078   39074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:18:37.574961   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
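The openssl/ln pairs above implement OpenSSL's c_rehash scheme: TLS libraries locate a CA in /etc/ssl/certs by a hash of its subject, so each PEM needs a <hash>.0 symlink (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). The same step by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"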
	I1002 20:18:37.582802   39074 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:18:37.586377   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:18:37.620185   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:18:37.653623   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:18:37.686983   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:18:37.720317   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:18:37.753617   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
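Each -checkend 86400 probe above asks openssl to exit non-zero if the certificate expires within the next 86400 seconds (24 hours); the restart path uses that exit status to decide whether a cert must be regenerated. Standalone:

	# exit 0 = valid for at least another day, exit 1 = expiring or expired
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expiring; regenerate"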
	I1002 20:18:37.787371   39074 kubeadm.go:400] StartCluster: {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:37.787431   39074 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:18:37.787474   39074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:18:37.813804   39074 cri.go:89] found id: ""
	I1002 20:18:37.813849   39074 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:18:37.821398   39074 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:18:37.821423   39074 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:18:37.821468   39074 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:18:37.828438   39074 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:18:37.828913   39074 kubeconfig.go:125] found "functional-753218" server: "https://192.168.49.2:8441"
	I1002 20:18:37.830019   39074 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:18:37.837252   39074 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 20:04:06.241851372 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 20:18:37.128983250 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
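The drift detection is plain `diff -u old new`: exit status 0 means the freshly rendered kubeadm.yaml matches what the cluster was built from, 1 means it differs (here only the enable-admission-plugins value changed) and triggers the reconfigure below. The same check by hand:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  || echo "kubeadm config drift: cluster will be reconfigured"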
	I1002 20:18:37.837272   39074 kubeadm.go:1160] stopping kube-system containers ...
	I1002 20:18:37.837284   39074 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 20:18:37.837326   39074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:18:37.863302   39074 cri.go:89] found id: ""
	I1002 20:18:37.863361   39074 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 20:18:37.911147   39074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:18:37.918894   39074 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 20:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  2 20:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  2 20:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  2 20:08 /etc/kubernetes/scheduler.conf
	
	I1002 20:18:37.918950   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:18:37.926065   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:18:37.933031   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:18:37.933065   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:18:37.939972   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:18:37.946875   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:18:37.946911   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:18:37.953620   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:18:37.960544   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:18:37.960573   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
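Each `sudo grep <server-url> <file>` above leans on grep's exit codes: 0 when the expected control-plane URL is present, 1 when it is absent (the "Process exited with status 1" cases), and any status-1 kubeconfig is deleted so the kubeconfig phase below can rewrite it. In shell terms:

	if ! sudo grep -q 'https://control-plane.minikube.internal:8441' /etc/kubernetes/scheduler.conf; then
	  sudo rm -f /etc/kubernetes/scheduler.conf   # stale or missing server address; let kubeadm regenerate it
	fi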
	I1002 20:18:37.967317   39074 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:18:37.974311   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:38.013321   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:39.074022   39074 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060677583s)
	I1002 20:18:39.074075   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:39.228791   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:39.281116   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
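Instead of a full `kubeadm init`, the restart path replays individual init phases against the saved config, in the order certs → kubeconfig → kubelet-start → control-plane → etcd. Run by hand, the sequence mirrors the five commands above:

	cfg=/var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase certs all --config "$cfg"
	sudo kubeadm init phase kubeconfig all --config "$cfg"
	sudo kubeadm init phase kubelet-start --config "$cfg"
	sudo kubeadm init phase control-plane all --config "$cfg"
	sudo kubeadm init phase etcd local --config "$cfg"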
	I1002 20:18:39.328956   39074 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:18:39.329020   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
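The wait loop that follows probes for the apiserver with pgrep, where -f matches against the full command line, -x requires that match to be exact, and -n reports only the newest matching PID. As a standalone check:

	# exits 0 and prints a PID once kube-apiserver is up; exits 1 while it is not
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'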
	I1002 20:18:39.829304   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same probe repeated at ~500ms intervals, 117 more times, with no apiserver process found ...]
	I1002 20:19:38.829677   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:39.329725   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:39.329777   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:39.355028   39074 cri.go:89] found id: ""
	I1002 20:19:39.355041   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.355048   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:39.355053   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:39.355092   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:39.380001   39074 cri.go:89] found id: ""
	I1002 20:19:39.380017   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.380026   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:39.380031   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:39.380090   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:39.405251   39074 cri.go:89] found id: ""
	I1002 20:19:39.405267   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.405273   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:39.405277   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:39.405321   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:39.430719   39074 cri.go:89] found id: ""
	I1002 20:19:39.430732   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.430739   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:39.430745   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:39.430794   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:39.454916   39074 cri.go:89] found id: ""
	I1002 20:19:39.454929   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.454936   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:39.454940   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:39.454981   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:39.478922   39074 cri.go:89] found id: ""
	I1002 20:19:39.478934   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.478940   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:39.478944   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:39.478983   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:39.503714   39074 cri.go:89] found id: ""
	I1002 20:19:39.503731   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.503739   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:39.503749   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:39.503760   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:39.573887   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:39.573907   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:39.585174   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:39.585191   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:39.639301   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:39.632705    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.633125    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.634665    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.635127    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.636638    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:19:39.632705    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.633125    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.634665    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.635127    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.636638    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
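The repeated "connection refused" on [::1]:8441 confirms that nothing is listening on the apiserver port, consistent with the empty crictl listings above; a direct probe against the endpoint would be:

	# refused while kube-apiserver is down; answers from /healthz once it binds :8441
	curl -k https://localhost:8441/healthz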
	I1002 20:19:39.639313   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:39.639322   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:39.699438   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:39.699455   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:42.228926   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:42.239185   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:42.239234   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:42.263214   39074 cri.go:89] found id: ""
	I1002 20:19:42.263230   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.263238   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:42.263245   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:42.263288   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:42.286996   39074 cri.go:89] found id: ""
	I1002 20:19:42.287009   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.287014   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:42.287019   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:42.287059   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:42.311539   39074 cri.go:89] found id: ""
	I1002 20:19:42.311555   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.311563   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:42.311568   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:42.311608   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:42.335720   39074 cri.go:89] found id: ""
	I1002 20:19:42.335735   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.335740   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:42.335744   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:42.335789   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:42.359620   39074 cri.go:89] found id: ""
	I1002 20:19:42.359635   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.359642   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:42.359658   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:42.359717   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:42.383670   39074 cri.go:89] found id: ""
	I1002 20:19:42.383684   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.383702   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:42.383708   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:42.383752   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:42.409324   39074 cri.go:89] found id: ""
	I1002 20:19:42.409337   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.409343   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:42.409350   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:42.409358   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:42.463480   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:42.456002    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.456468    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.458629    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.459138    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.460809    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:19:42.463498   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:42.463508   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:42.522978   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:42.522994   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:42.550529   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:42.550544   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:42.618426   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:42.618446   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
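
The cycle above repeats below at intervals of roughly 2.5-3 s: minikube pgreps for a kube-apiserver process, lists each expected control-plane container via crictl, and, finding none, gathers kubelet/dmesg/CRI-O/container-status logs before retrying. A minimal Go sketch of that polling loop (not minikube's actual code; it assumes crictl is on the local PATH with sudo access and runs locally rather than over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// components mirrors the container names queried in each cycle of the log.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

func main() {
	for {
		healthy := true
		for _, name := range components {
			// Equivalent of: sudo crictl ps -a --quiet --name=<name>
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if strings.TrimSpace(string(out)) == "" {
				fmt.Printf("no container found matching %q\n", name)
				healthy = false
			}
		}
		if healthy {
			return
		}
		// The log shows roughly 2.5-3 s between cycle starts.
		time.Sleep(2500 * time.Millisecond)
	}
}

Every cycle in this log returns empty IDs for all seven names, which is why the loop never exits before the test times out.
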
	I1002 20:19:45.130475   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:45.140935   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:45.140984   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:45.166296   39074 cri.go:89] found id: ""
	I1002 20:19:45.166307   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.166313   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:45.166318   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:45.166370   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:45.190669   39074 cri.go:89] found id: ""
	I1002 20:19:45.190684   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.190690   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:45.190694   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:45.190748   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:45.215836   39074 cri.go:89] found id: ""
	I1002 20:19:45.215861   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.215866   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:45.215870   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:45.215911   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:45.240020   39074 cri.go:89] found id: ""
	I1002 20:19:45.240032   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.240037   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:45.240054   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:45.240103   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:45.265411   39074 cri.go:89] found id: ""
	I1002 20:19:45.265424   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.265430   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:45.265434   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:45.265482   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:45.289247   39074 cri.go:89] found id: ""
	I1002 20:19:45.289262   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.289272   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:45.289277   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:45.289327   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:45.313127   39074 cri.go:89] found id: ""
	I1002 20:19:45.313142   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.313149   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:45.313157   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:45.313175   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:45.383170   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:45.383189   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:45.394492   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:45.394506   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:45.448758   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:45.441841    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.442413    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.443998    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.444386    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.445933    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:19:45.448771   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:45.448780   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:45.512497   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:45.512515   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:48.041482   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:48.051591   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:48.051635   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:48.076424   39074 cri.go:89] found id: ""
	I1002 20:19:48.076441   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.076449   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:48.076454   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:48.076499   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:48.100297   39074 cri.go:89] found id: ""
	I1002 20:19:48.100324   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.100330   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:48.100334   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:48.100378   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:48.124828   39074 cri.go:89] found id: ""
	I1002 20:19:48.124845   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.124854   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:48.124860   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:48.124916   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:48.148977   39074 cri.go:89] found id: ""
	I1002 20:19:48.148991   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.148998   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:48.149002   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:48.149045   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:48.172962   39074 cri.go:89] found id: ""
	I1002 20:19:48.172978   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.172987   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:48.172992   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:48.173078   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:48.196028   39074 cri.go:89] found id: ""
	I1002 20:19:48.196047   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.196056   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:48.196063   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:48.196116   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:48.219489   39074 cri.go:89] found id: ""
	I1002 20:19:48.219506   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.219514   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:48.219524   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:48.219535   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:48.285750   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:48.285767   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:48.296759   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:48.296773   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:48.350552   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:48.343634    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.344266    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.345849    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.346274    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.347827    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:19:48.350562   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:48.350570   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:48.415152   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:48.415174   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:50.944831   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:50.955007   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:50.955051   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:50.979562   39074 cri.go:89] found id: ""
	I1002 20:19:50.979574   39074 logs.go:282] 0 containers: []
	W1002 20:19:50.979580   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:50.979586   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:50.979626   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:51.005726   39074 cri.go:89] found id: ""
	I1002 20:19:51.005738   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.005744   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:51.005748   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:51.005789   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:51.029734   39074 cri.go:89] found id: ""
	I1002 20:19:51.029751   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.029760   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:51.029766   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:51.029810   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:51.053889   39074 cri.go:89] found id: ""
	I1002 20:19:51.053904   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.053912   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:51.053918   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:51.053970   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:51.080377   39074 cri.go:89] found id: ""
	I1002 20:19:51.080389   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.080394   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:51.080399   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:51.080438   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:51.105307   39074 cri.go:89] found id: ""
	I1002 20:19:51.105321   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.105326   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:51.105331   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:51.105371   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:51.130666   39074 cri.go:89] found id: ""
	I1002 20:19:51.130682   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.130689   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:51.130700   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:51.130710   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:51.141518   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:51.141533   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:51.194182   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:51.187772    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.188306    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.189890    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.190325    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.191812    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:19:51.194195   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:51.194204   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:51.253875   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:51.253894   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:51.281673   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:51.281693   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:53.847012   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:53.857350   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:53.857394   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:53.882278   39074 cri.go:89] found id: ""
	I1002 20:19:53.882291   39074 logs.go:282] 0 containers: []
	W1002 20:19:53.882297   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:53.882309   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:53.882351   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:53.906222   39074 cri.go:89] found id: ""
	I1002 20:19:53.906235   39074 logs.go:282] 0 containers: []
	W1002 20:19:53.906241   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:53.906245   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:53.906294   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:53.930975   39074 cri.go:89] found id: ""
	I1002 20:19:53.930988   39074 logs.go:282] 0 containers: []
	W1002 20:19:53.930995   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:53.930999   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:53.931045   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:53.957875   39074 cri.go:89] found id: ""
	I1002 20:19:53.957891   39074 logs.go:282] 0 containers: []
	W1002 20:19:53.957901   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:53.957907   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:53.958019   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:53.982116   39074 cri.go:89] found id: ""
	I1002 20:19:53.982129   39074 logs.go:282] 0 containers: []
	W1002 20:19:53.982135   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:53.982140   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:53.982181   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:54.006296   39074 cri.go:89] found id: ""
	I1002 20:19:54.006310   39074 logs.go:282] 0 containers: []
	W1002 20:19:54.006316   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:54.006320   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:54.006360   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:54.031088   39074 cri.go:89] found id: ""
	I1002 20:19:54.031102   39074 logs.go:282] 0 containers: []
	W1002 20:19:54.031108   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:54.031116   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:54.031125   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:54.041909   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:54.041951   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:54.095399   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:54.088843    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.089263    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.090810    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.091232    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.092782    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:19:54.095411   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:54.095438   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:54.159991   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:54.160010   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:54.187642   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:54.187676   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:56.757287   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:56.768252   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:56.768293   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:56.793773   39074 cri.go:89] found id: ""
	I1002 20:19:56.793785   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.793791   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:56.793796   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:56.793841   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:56.819484   39074 cri.go:89] found id: ""
	I1002 20:19:56.819499   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.819509   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:56.819516   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:56.819558   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:56.844773   39074 cri.go:89] found id: ""
	I1002 20:19:56.844787   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.844793   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:56.844798   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:56.844838   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:56.869847   39074 cri.go:89] found id: ""
	I1002 20:19:56.869888   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.869898   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:56.869906   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:56.869956   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:56.894519   39074 cri.go:89] found id: ""
	I1002 20:19:56.894537   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.894545   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:56.894553   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:56.894613   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:56.920670   39074 cri.go:89] found id: ""
	I1002 20:19:56.920689   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.920698   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:56.920706   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:56.920758   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:56.945515   39074 cri.go:89] found id: ""
	I1002 20:19:56.945529   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.945535   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:56.945543   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:56.945557   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:57.001311   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:56.994723    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.995244    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.996779    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.997235    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.998722    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:19:57.001323   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:57.001332   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:57.065838   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:57.065856   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:57.093387   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:57.093401   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:57.161709   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:57.161730   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:59.673972   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:59.684279   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:59.684321   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:59.708892   39074 cri.go:89] found id: ""
	I1002 20:19:59.708905   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.708911   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:59.708915   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:59.708958   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:59.733806   39074 cri.go:89] found id: ""
	I1002 20:19:59.733821   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.733828   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:59.733834   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:59.733886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:59.758895   39074 cri.go:89] found id: ""
	I1002 20:19:59.758907   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.758913   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:59.758918   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:59.758970   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:59.782140   39074 cri.go:89] found id: ""
	I1002 20:19:59.782154   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.782161   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:59.782166   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:59.782211   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:59.806783   39074 cri.go:89] found id: ""
	I1002 20:19:59.806797   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.806803   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:59.806808   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:59.806851   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:59.831636   39074 cri.go:89] found id: ""
	I1002 20:19:59.831663   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.831673   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:59.831679   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:59.831725   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:59.855094   39074 cri.go:89] found id: ""
	I1002 20:19:59.855110   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.855119   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:59.855128   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:59.855139   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:59.916579   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:59.916598   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:59.944216   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:59.944230   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:00.010694   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:00.010712   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:00.021993   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:00.022008   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:00.076257   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:00.069139    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.069711    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.071246    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.071701    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.073412    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
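
Every "describe nodes" attempt above and below fails the same way: kubectl dials localhost:8441 and the TCP connect itself is refused, because no kube-apiserver container ever started to listen on that port. A hypothetical direct probe of the apiserver health endpoint that reproduces the same failure mode (the /livez path is standard on recent Kubernetes releases; the port and the TLS handling here are assumptions for illustration, not test code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate, so skip
		// verification for this one-off diagnostic probe only.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8441/livez")
	if err != nil {
		// With no kube-apiserver running, this prints the same
		// "connect: connection refused" seen in the kubectl stderr above.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver status:", resp.Status)
}
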
	I1002 20:20:02.577956   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:02.588476   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:02.588521   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:02.612197   39074 cri.go:89] found id: ""
	I1002 20:20:02.612213   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.612224   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:02.612231   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:02.612283   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:02.636711   39074 cri.go:89] found id: ""
	I1002 20:20:02.636727   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.636737   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:02.636743   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:02.636797   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:02.660364   39074 cri.go:89] found id: ""
	I1002 20:20:02.660380   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.660389   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:02.660396   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:02.660448   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:02.684665   39074 cri.go:89] found id: ""
	I1002 20:20:02.684682   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.684689   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:02.684694   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:02.684739   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:02.710226   39074 cri.go:89] found id: ""
	I1002 20:20:02.710239   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.710247   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:02.710254   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:02.710308   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:02.735247   39074 cri.go:89] found id: ""
	I1002 20:20:02.735262   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.735271   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:02.735278   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:02.735328   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:02.760072   39074 cri.go:89] found id: ""
	I1002 20:20:02.760085   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.760091   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:02.760098   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:02.760106   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:02.824182   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:02.824200   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:02.835284   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:02.835297   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:02.888320   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:02.881490    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.881999    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.883536    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.883961    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.885446    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:02.888330   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:02.888339   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:02.952125   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:02.952145   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:05.481086   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:05.491660   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:05.491723   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:05.517036   39074 cri.go:89] found id: ""
	I1002 20:20:05.517052   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.517060   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:05.517067   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:05.517114   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:05.542299   39074 cri.go:89] found id: ""
	I1002 20:20:05.542312   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.542320   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:05.542326   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:05.542387   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:05.567213   39074 cri.go:89] found id: ""
	I1002 20:20:05.567227   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.567233   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:05.567238   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:05.567286   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:05.590782   39074 cri.go:89] found id: ""
	I1002 20:20:05.590795   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.590801   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:05.590807   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:05.590850   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:05.615825   39074 cri.go:89] found id: ""
	I1002 20:20:05.615837   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.615843   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:05.615849   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:05.615886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:05.640124   39074 cri.go:89] found id: ""
	I1002 20:20:05.640137   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.640143   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:05.640148   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:05.640191   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:05.664435   39074 cri.go:89] found id: ""
	I1002 20:20:05.664451   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.664460   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:05.664469   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:05.664478   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:05.675270   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:05.675284   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:05.728958   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:05.722310    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.722829    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.724378    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.724835    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.726322    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:05.728968   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:05.728977   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:05.789744   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:05.789763   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:05.816871   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:05.816886   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
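	[The cri.go:54 / cri.go:89 pairs above are minikube listing CRI containers by component name; each `found id: ""` means `sudo crictl ps -a --quiet --name=<component>` printed nothing. A minimal local sketch of the same check — illustrative only: the helper listContainerIDs is not minikube's API, and it runs crictl directly instead of through minikube's SSH runner:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns
	// the container IDs it prints, one per line. An empty result mirrors the
	// `found id: ""` / `0 containers` lines in the log above.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		// Same component list the log cycles through.
		for _, name := range []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		} {
			ids, err := listContainerIDs(name)
			if err != nil {
				fmt.Println(name, "error:", err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
		}
	}
	]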
	I1002 20:20:08.386603   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:08.396838   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:08.396887   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:08.421504   39074 cri.go:89] found id: ""
	I1002 20:20:08.421516   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.421526   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:08.421531   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:08.421573   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:08.445525   39074 cri.go:89] found id: ""
	I1002 20:20:08.445539   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.445551   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:08.445557   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:08.445611   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:08.473912   39074 cri.go:89] found id: ""
	I1002 20:20:08.473926   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.473932   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:08.473937   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:08.473977   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:08.498551   39074 cri.go:89] found id: ""
	I1002 20:20:08.498567   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.498575   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:08.498579   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:08.498619   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:08.522969   39074 cri.go:89] found id: ""
	I1002 20:20:08.522985   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.522991   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:08.522996   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:08.523041   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:08.546557   39074 cri.go:89] found id: ""
	I1002 20:20:08.546572   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.546579   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:08.546583   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:08.546628   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:08.570570   39074 cri.go:89] found id: ""
	I1002 20:20:08.570586   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.570595   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:08.570605   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:08.570619   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:08.639672   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:08.639691   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:08.651327   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:08.651345   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:08.704679   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:08.698150    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.698634    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.700211    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.700630    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.702066    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:08.698150    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.698634    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.700211    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.700630    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.702066    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:08.704698   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:08.704710   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:08.767857   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:08.767876   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
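	[Each `describe nodes` attempt fails identically: kubectl cannot reach the apiserver at localhost:8441 (`connection refused`), which is consistent with the empty kube-apiserver container listings. A quick sketch that probes the same port without kubectl — the port number comes from the log; everything else here is assumed:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Try the endpoint kubectl keeps failing against in the log above.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			// Matches the `dial tcp [::1]:8441: connect: connection refused` errors.
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open")
	}
	]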
	I1002 20:20:11.297723   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:11.307921   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:11.307963   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:11.337544   39074 cri.go:89] found id: ""
	I1002 20:20:11.337560   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.337577   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:11.337584   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:11.337640   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:11.363291   39074 cri.go:89] found id: ""
	I1002 20:20:11.363306   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.363315   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:11.363325   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:11.363366   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:11.387886   39074 cri.go:89] found id: ""
	I1002 20:20:11.387905   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.387915   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:11.387922   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:11.387972   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:11.412550   39074 cri.go:89] found id: ""
	I1002 20:20:11.412565   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.412573   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:11.412579   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:11.412677   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:11.437380   39074 cri.go:89] found id: ""
	I1002 20:20:11.437396   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.437405   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:11.437411   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:11.437452   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:11.461402   39074 cri.go:89] found id: ""
	I1002 20:20:11.461415   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.461421   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:11.461426   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:11.461471   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:11.486814   39074 cri.go:89] found id: ""
	I1002 20:20:11.486828   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.486833   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:11.486840   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:11.486848   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:11.497776   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:11.497791   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:11.552252   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:11.545574    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.546102    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.547700    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.548151    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.549707    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:11.545574    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.546102    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.547700    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.548151    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.549707    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:11.552263   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:11.552278   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:11.614501   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:11.614519   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:11.641975   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:11.641990   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:14.212363   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:14.223339   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:14.223387   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:14.247765   39074 cri.go:89] found id: ""
	I1002 20:20:14.247782   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.247790   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:14.247796   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:14.247850   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:14.272207   39074 cri.go:89] found id: ""
	I1002 20:20:14.272223   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.272230   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:14.272235   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:14.272275   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:14.296884   39074 cri.go:89] found id: ""
	I1002 20:20:14.296896   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.296901   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:14.296906   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:14.296953   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:14.322400   39074 cri.go:89] found id: ""
	I1002 20:20:14.322416   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.322424   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:14.322430   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:14.322483   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:14.348457   39074 cri.go:89] found id: ""
	I1002 20:20:14.348474   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.348482   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:14.348488   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:14.348529   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:14.371846   39074 cri.go:89] found id: ""
	I1002 20:20:14.371859   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.371866   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:14.371870   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:14.371910   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:14.396739   39074 cri.go:89] found id: ""
	I1002 20:20:14.396757   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.396765   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:14.396775   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:14.396785   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:14.461682   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:14.461703   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:14.473125   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:14.473138   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:14.527220   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:14.520100    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.520639    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.522150    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.522547    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.524758    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:14.520100    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.520639    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.522150    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.522547    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.524758    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:14.527230   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:14.527243   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:14.587080   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:14.587097   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
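	[The `Gathering logs for ...` steps run fixed shell commands, copied verbatim below from the Run: lines above. A sketch that executes the same set locally — running them on the host instead of through minikube's SSH runner is the simplifying assumption:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// The exact commands the log runs, keyed by the label it prints
	// ("Gathering logs for <label> ...").
	var gatherers = map[string]string{
		"kubelet":          `sudo journalctl -u kubelet -n 400`,
		"CRI-O":            `sudo journalctl -u crio -n 400`,
		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}

	func main() {
		for label, cmd := range gatherers {
			fmt.Println("Gathering logs for", label, "...")
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("%s failed: %v\n", label, err)
			}
			fmt.Printf("%s: %d bytes of output\n", label, len(out))
		}
	}
	]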
	I1002 20:20:17.117171   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:17.127800   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:17.127860   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:17.153825   39074 cri.go:89] found id: ""
	I1002 20:20:17.153838   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.153845   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:17.153850   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:17.153890   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:17.179191   39074 cri.go:89] found id: ""
	I1002 20:20:17.179208   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.179218   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:17.179225   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:17.179283   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:17.203643   39074 cri.go:89] found id: ""
	I1002 20:20:17.203670   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.203677   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:17.203682   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:17.203729   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:17.228485   39074 cri.go:89] found id: ""
	I1002 20:20:17.228500   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.228509   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:17.228513   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:17.228552   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:17.254499   39074 cri.go:89] found id: ""
	I1002 20:20:17.254513   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.254519   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:17.254524   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:17.254568   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:17.280943   39074 cri.go:89] found id: ""
	I1002 20:20:17.280959   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.280968   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:17.280975   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:17.281022   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:17.306591   39074 cri.go:89] found id: ""
	I1002 20:20:17.306607   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.306615   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:17.306624   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:17.306638   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:17.365595   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:17.358275    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.359542    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.359993    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.361559    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.362067    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:17.358275    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.359542    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.359993    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.361559    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.362067    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:17.365605   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:17.365615   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:17.428722   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:17.428741   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:17.456720   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:17.456736   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:17.526400   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:17.526419   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:20.038675   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:20.049608   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:20.049670   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:20.075162   39074 cri.go:89] found id: ""
	I1002 20:20:20.075178   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.075193   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:20.075200   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:20.075244   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:20.100714   39074 cri.go:89] found id: ""
	I1002 20:20:20.100730   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.100739   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:20.100745   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:20.100796   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:20.125515   39074 cri.go:89] found id: ""
	I1002 20:20:20.125530   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.125536   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:20.125541   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:20.125590   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:20.150152   39074 cri.go:89] found id: ""
	I1002 20:20:20.150166   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.150172   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:20.150176   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:20.150219   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:20.174386   39074 cri.go:89] found id: ""
	I1002 20:20:20.174400   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.174405   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:20.174410   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:20.174451   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:20.198954   39074 cri.go:89] found id: ""
	I1002 20:20:20.198967   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.198974   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:20.198978   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:20.199019   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:20.223494   39074 cri.go:89] found id: ""
	I1002 20:20:20.223506   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.223512   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:20.223520   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:20.223530   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:20.234227   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:20.234242   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:20.287508   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:20.281135    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.281556    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.283225    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.283624    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.285109    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:20.281135    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.281556    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.283225    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.283624    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.285109    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:20.287521   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:20.287530   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:20.353299   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:20.353316   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:20.381247   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:20.381264   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
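	[The poll timestamps (20:20:05, :08, :11, :14, :17, :20, ...) show the whole diagnostic cycle repeating roughly every three seconds until the wait window expires. A generic sketch of such a poll-until-ready loop — the interval and timeout values are illustrative, and pollUntil is not minikube's helper:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// pollUntil retries check every interval until it succeeds or timeout
	// elapses, mirroring the ~3s apiserver polls visible in the timestamps above.
	func pollUntil(interval, timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: last error: %w", err)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		err := pollUntil(3*time.Second, 30*time.Second, func() error {
			conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
			if err != nil {
				return err
			}
			conn.Close()
			return nil
		})
		if err != nil {
			fmt.Println("apiserver never became reachable:", err)
		}
	}
	]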
	I1002 20:20:22.948641   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:22.958867   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:22.958923   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:22.982867   39074 cri.go:89] found id: ""
	I1002 20:20:22.982888   39074 logs.go:282] 0 containers: []
	W1002 20:20:22.982896   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:22.982905   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:22.982963   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:23.008002   39074 cri.go:89] found id: ""
	I1002 20:20:23.008019   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.008025   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:23.008031   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:23.008102   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:23.032729   39074 cri.go:89] found id: ""
	I1002 20:20:23.032745   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.032755   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:23.032761   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:23.032805   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:23.057489   39074 cri.go:89] found id: ""
	I1002 20:20:23.057506   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.057513   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:23.057520   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:23.057574   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:23.082449   39074 cri.go:89] found id: ""
	I1002 20:20:23.082465   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.082473   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:23.082480   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:23.082533   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:23.106284   39074 cri.go:89] found id: ""
	I1002 20:20:23.106300   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.106308   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:23.106314   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:23.106356   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:23.131674   39074 cri.go:89] found id: ""
	I1002 20:20:23.131689   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.131698   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:23.131708   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:23.131719   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:23.202584   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:23.202606   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:23.213553   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:23.213567   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:23.267093   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:23.260296    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.260752    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.262302    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.262721    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.264215    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:23.260296    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.260752    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.262302    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.262721    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.264215    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:23.267107   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:23.267117   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:23.330039   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:23.330057   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:25.859757   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:25.870050   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:25.870094   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:25.893890   39074 cri.go:89] found id: ""
	I1002 20:20:25.893903   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.893909   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:25.893913   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:25.893962   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:25.918711   39074 cri.go:89] found id: ""
	I1002 20:20:25.918724   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.918731   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:25.918740   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:25.918790   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:25.943028   39074 cri.go:89] found id: ""
	I1002 20:20:25.943040   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.943046   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:25.943050   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:25.943100   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:25.968555   39074 cri.go:89] found id: ""
	I1002 20:20:25.968569   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.968575   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:25.968580   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:25.968630   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:25.993321   39074 cri.go:89] found id: ""
	I1002 20:20:25.993334   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.993340   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:25.993344   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:25.993393   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:26.017729   39074 cri.go:89] found id: ""
	I1002 20:20:26.017755   39074 logs.go:282] 0 containers: []
	W1002 20:20:26.017761   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:26.017766   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:26.017807   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:26.042867   39074 cri.go:89] found id: ""
	I1002 20:20:26.042879   39074 logs.go:282] 0 containers: []
	W1002 20:20:26.042885   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:26.042892   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:26.042900   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:26.109498   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:26.109517   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:26.120700   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:26.120715   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:26.174158   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:26.167675    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.168158    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.169684    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.170006    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.171555    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:26.167675    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.168158    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.169684    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.170006    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.171555    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:26.174169   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:26.174177   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:26.232801   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:26.232820   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
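	[Before each container listing the log runs `sudo pgrep -xnf kube-apiserver.*minikube.*`; pgrep exits non-zero when no matching process exists, which is what keeps this cycle going. A minimal equivalent in Go, assuming sudo and pgrep are available on the host:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same process check the log runs at the top of every cycle.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			// pgrep exit status 1 means no process matched the pattern.
			fmt.Println("no kube-apiserver process found:", err)
			return
		}
		fmt.Printf("kube-apiserver pid: %s", out)
	}
	]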
	I1002 20:20:28.760440   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:28.770974   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:28.771015   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:28.795071   39074 cri.go:89] found id: ""
	I1002 20:20:28.795084   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.795089   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:28.795094   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:28.795137   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:28.820101   39074 cri.go:89] found id: ""
	I1002 20:20:28.820114   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.820120   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:28.820125   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:28.820174   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:28.844954   39074 cri.go:89] found id: ""
	I1002 20:20:28.844967   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.844974   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:28.844978   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:28.845021   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:28.869971   39074 cri.go:89] found id: ""
	I1002 20:20:28.869984   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.869991   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:28.869996   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:28.870035   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:28.894419   39074 cri.go:89] found id: ""
	I1002 20:20:28.894434   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.894443   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:28.894454   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:28.894497   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:28.919785   39074 cri.go:89] found id: ""
	I1002 20:20:28.919798   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.919804   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:28.919808   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:28.919847   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:28.945626   39074 cri.go:89] found id: ""
	I1002 20:20:28.945644   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.945666   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:28.945676   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:28.945688   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:29.013406   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:29.013424   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:29.024733   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:29.024749   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:29.079492   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:29.073004    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.073547    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.075195    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.075620    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.077061    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:29.073004    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.073547    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.075195    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.075620    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.077061    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:29.079501   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:29.079510   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:29.143375   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:29.143393   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:31.673342   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:31.683685   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:31.683744   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:31.708355   39074 cri.go:89] found id: ""
	I1002 20:20:31.708368   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.708374   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:31.708378   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:31.708426   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:31.732066   39074 cri.go:89] found id: ""
	I1002 20:20:31.732080   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.732085   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:31.732090   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:31.732128   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:31.756955   39074 cri.go:89] found id: ""
	I1002 20:20:31.756968   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.756975   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:31.756981   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:31.757031   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:31.783141   39074 cri.go:89] found id: ""
	I1002 20:20:31.783157   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.783163   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:31.783168   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:31.783209   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:31.807678   39074 cri.go:89] found id: ""
	I1002 20:20:31.807691   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.807698   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:31.807703   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:31.807745   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:31.831482   39074 cri.go:89] found id: ""
	I1002 20:20:31.831494   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.831500   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:31.831504   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:31.831548   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:31.855667   39074 cri.go:89] found id: ""
	I1002 20:20:31.855683   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.855692   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:31.855700   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:31.855710   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:31.882380   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:31.882395   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:31.947814   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:31.947838   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:31.958919   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:31.958934   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:32.013721   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:32.006971    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.007473    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.009037    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.009432    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.010967    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:32.006971    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.007473    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.009037    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.009432    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.010967    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:32.013731   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:32.013742   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
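The recurring `dial tcp [::1]:8441: connect: connection refused` means nothing is listening on the apiserver port at all, as opposed to a TLS or authorization failure, which would surface as an HTTP-level error. A quick way to confirm the distinction from the node, assuming curl is available (a hypothetical check, not part of the test run):

    # Connection refused at the TCP layer -> curl exit code 7, no HTTP status.
    curl -sk -o /dev/null -w '%{http_code}\n' --max-time 2 https://localhost:8441/livez; echo "exit=$?"

If the apiserver were up but rejecting the client, curl would instead report a 401/403 status and exit 0.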
	I1002 20:20:34.575751   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:34.585980   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:34.586030   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:34.610997   39074 cri.go:89] found id: ""
	I1002 20:20:34.611013   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.611019   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:34.611024   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:34.611076   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:34.635375   39074 cri.go:89] found id: ""
	I1002 20:20:34.635388   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.635394   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:34.635401   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:34.635449   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:34.659513   39074 cri.go:89] found id: ""
	I1002 20:20:34.659526   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.659532   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:34.659536   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:34.659584   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:34.683614   39074 cri.go:89] found id: ""
	I1002 20:20:34.683628   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.683634   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:34.683638   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:34.683709   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:34.707536   39074 cri.go:89] found id: ""
	I1002 20:20:34.707548   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.707554   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:34.707558   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:34.707606   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:34.730813   39074 cri.go:89] found id: ""
	I1002 20:20:34.730829   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.730838   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:34.730844   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:34.730886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:34.756746   39074 cri.go:89] found id: ""
	I1002 20:20:34.756758   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.756763   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:34.756770   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:34.756779   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:34.823845   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:34.823864   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:34.834944   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:34.834959   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:34.889016   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:34.882235    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.882739    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.884456    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.884966    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.886550    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:34.882235    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.882739    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.884456    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.884966    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.886550    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:34.889027   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:34.889039   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:34.952102   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:34.952120   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:37.482142   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:37.492739   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:37.492783   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:37.518265   39074 cri.go:89] found id: ""
	I1002 20:20:37.518279   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.518285   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:37.518290   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:37.518332   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:37.544309   39074 cri.go:89] found id: ""
	I1002 20:20:37.544322   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.544327   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:37.544332   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:37.544371   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:37.568928   39074 cri.go:89] found id: ""
	I1002 20:20:37.568947   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.568955   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:37.568960   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:37.569000   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:37.593112   39074 cri.go:89] found id: ""
	I1002 20:20:37.593125   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.593131   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:37.593135   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:37.593175   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:37.617378   39074 cri.go:89] found id: ""
	I1002 20:20:37.617393   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.617399   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:37.617404   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:37.617446   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:37.641497   39074 cri.go:89] found id: ""
	I1002 20:20:37.641509   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.641514   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:37.641519   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:37.641560   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:37.665025   39074 cri.go:89] found id: ""
	I1002 20:20:37.665037   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.665043   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:37.665050   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:37.665059   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:37.729867   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:37.729886   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:37.741144   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:37.741161   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:37.794545   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:37.788104    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.788618    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.790124    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.790578    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.791673    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:37.788104    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.788618    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.790124    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.790578    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.791673    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:37.794554   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:37.794563   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:37.858517   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:37.858537   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:40.387221   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:40.397406   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:40.397456   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:40.422226   39074 cri.go:89] found id: ""
	I1002 20:20:40.422241   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.422249   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:40.422256   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:40.422312   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:40.448898   39074 cri.go:89] found id: ""
	I1002 20:20:40.448914   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.448922   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:40.448928   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:40.448970   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:40.473866   39074 cri.go:89] found id: ""
	I1002 20:20:40.473883   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.473891   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:40.473898   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:40.473940   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:40.499789   39074 cri.go:89] found id: ""
	I1002 20:20:40.499804   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.499820   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:40.499827   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:40.499870   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:40.524055   39074 cri.go:89] found id: ""
	I1002 20:20:40.524070   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.524078   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:40.524084   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:40.524131   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:40.549681   39074 cri.go:89] found id: ""
	I1002 20:20:40.549697   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.549705   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:40.549709   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:40.549751   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:40.574534   39074 cri.go:89] found id: ""
	I1002 20:20:40.574551   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.574559   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:40.574568   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:40.574585   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:40.585332   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:40.585345   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:40.639552   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:40.632883    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.633368    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.634904    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.635294    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.636825    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:40.632883    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.633368    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.634904    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.635294    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.636825    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:40.639561   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:40.639570   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:40.703074   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:40.703093   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:40.731458   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:40.731471   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:43.302779   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:43.313194   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:43.313249   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:43.340348   39074 cri.go:89] found id: ""
	I1002 20:20:43.340361   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.340367   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:43.340372   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:43.340416   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:43.365438   39074 cri.go:89] found id: ""
	I1002 20:20:43.365453   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.365461   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:43.365467   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:43.365530   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:43.392295   39074 cri.go:89] found id: ""
	I1002 20:20:43.392308   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.392314   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:43.392319   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:43.392358   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:43.417313   39074 cri.go:89] found id: ""
	I1002 20:20:43.417326   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.417332   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:43.417336   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:43.417381   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:43.441890   39074 cri.go:89] found id: ""
	I1002 20:20:43.441907   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.441913   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:43.441917   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:43.441959   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:43.467410   39074 cri.go:89] found id: ""
	I1002 20:20:43.467427   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.467438   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:43.467444   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:43.467501   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:43.492142   39074 cri.go:89] found id: ""
	I1002 20:20:43.492154   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.492160   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:43.492168   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:43.492178   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:43.520876   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:43.520907   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:43.586242   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:43.586258   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:43.597341   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:43.597355   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:43.651087   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:43.644558    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.645043    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.646588    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.647003    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.648460    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:43.644558    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.645043    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.646588    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.647003    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.648460    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:43.651098   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:43.651112   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
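The cycles repeat on a roughly three-second cadence (20:20:31, :34, :37, ...), and the order in which the kubelet, dmesg, describe-nodes, CRI-O, and container-status logs are gathered varies between iterations. A hedged sketch of an equivalent wait-and-collect loop; the interval is read off the timestamps above, while the 120s retry budget and the /tmp log paths are assumptions for illustration:

    # Wait for the apiserver container, collecting logs while we wait.
    deadline=$((SECONDS + 120))   # illustrative budget, not minikube's value
    until sudo crictl ps -a --quiet --name=kube-apiserver | grep -q .; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for kube-apiserver"; break; }
      sudo journalctl -u kubelet -n 400 > /tmp/kubelet.log
      sudo journalctl -u crio -n 400 > /tmp/crio.log
      sleep 3
    done

In this run the probe never succeeds, and the cycle simply repeats until the test's overall timeout expires.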
	I1002 20:20:46.210362   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:46.220658   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:46.220710   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:46.245577   39074 cri.go:89] found id: ""
	I1002 20:20:46.245591   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.245597   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:46.245601   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:46.245641   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:46.270950   39074 cri.go:89] found id: ""
	I1002 20:20:46.270965   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.270974   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:46.270979   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:46.271024   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:46.295887   39074 cri.go:89] found id: ""
	I1002 20:20:46.295903   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.295911   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:46.295917   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:46.295969   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:46.321705   39074 cri.go:89] found id: ""
	I1002 20:20:46.321721   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.321730   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:46.321736   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:46.321785   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:46.348811   39074 cri.go:89] found id: ""
	I1002 20:20:46.348827   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.348836   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:46.348842   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:46.348900   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:46.373477   39074 cri.go:89] found id: ""
	I1002 20:20:46.373493   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.373502   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:46.373508   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:46.373552   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:46.398884   39074 cri.go:89] found id: ""
	I1002 20:20:46.398900   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.398908   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:46.398917   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:46.398926   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:46.463113   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:46.463131   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:46.474566   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:46.474578   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:46.529468   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:46.522633    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.523203    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.524813    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.525199    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.526736    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:46.522633    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.523203    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.524813    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.525199    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.526736    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:46.529479   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:46.529489   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:46.590223   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:46.590241   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:49.118745   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:49.128971   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:49.129012   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:49.155632   39074 cri.go:89] found id: ""
	I1002 20:20:49.155662   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.155683   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:49.155689   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:49.155734   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:49.180611   39074 cri.go:89] found id: ""
	I1002 20:20:49.180629   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.180635   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:49.180639   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:49.180703   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:49.206534   39074 cri.go:89] found id: ""
	I1002 20:20:49.206557   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.206563   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:49.206568   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:49.206617   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:49.231608   39074 cri.go:89] found id: ""
	I1002 20:20:49.231625   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.231633   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:49.231641   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:49.231713   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:49.256407   39074 cri.go:89] found id: ""
	I1002 20:20:49.256426   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.256433   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:49.256439   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:49.256490   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:49.281494   39074 cri.go:89] found id: ""
	I1002 20:20:49.281509   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.281517   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:49.281524   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:49.281571   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:49.306502   39074 cri.go:89] found id: ""
	I1002 20:20:49.306518   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.306526   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:49.306534   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:49.306543   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:49.374386   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:49.374408   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:49.385910   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:49.385928   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:49.440525   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:49.433626    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.434180    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.435811    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.436224    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.437741    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:49.433626    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.434180    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.435811    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.436224    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.437741    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:49.440537   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:49.440549   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:49.501317   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:49.501334   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:52.031253   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:52.041701   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:52.041754   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:52.066302   39074 cri.go:89] found id: ""
	I1002 20:20:52.066315   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.066321   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:52.066325   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:52.066375   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:52.091575   39074 cri.go:89] found id: ""
	I1002 20:20:52.091591   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.091600   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:52.091606   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:52.091674   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:52.115838   39074 cri.go:89] found id: ""
	I1002 20:20:52.115854   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.115861   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:52.115867   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:52.115914   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:52.141387   39074 cri.go:89] found id: ""
	I1002 20:20:52.141402   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.141412   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:52.141417   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:52.141460   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:52.166810   39074 cri.go:89] found id: ""
	I1002 20:20:52.166823   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.166828   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:52.166832   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:52.166872   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:52.192399   39074 cri.go:89] found id: ""
	I1002 20:20:52.192413   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.192420   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:52.192425   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:52.192473   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:52.217364   39074 cri.go:89] found id: ""
	I1002 20:20:52.217378   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.217385   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:52.217391   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:52.217401   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:52.272135   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:52.265457    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.266093    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.267566    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.268058    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.269531    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:52.265457    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.266093    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.267566    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.268058    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.269531    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:52.272144   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:52.272152   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:52.334330   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:52.334352   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:52.364500   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:52.364514   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:52.427683   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:52.427702   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:54.939454   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:54.950121   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:54.950174   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:54.975667   39074 cri.go:89] found id: ""
	I1002 20:20:54.975683   39074 logs.go:282] 0 containers: []
	W1002 20:20:54.975692   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:54.975697   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:54.975739   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:55.000676   39074 cri.go:89] found id: ""
	I1002 20:20:55.000692   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.000702   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:55.000711   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:55.000772   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:55.025484   39074 cri.go:89] found id: ""
	I1002 20:20:55.025499   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.025509   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:55.025516   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:55.025570   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:55.050548   39074 cri.go:89] found id: ""
	I1002 20:20:55.050562   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.050570   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:55.050576   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:55.050623   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:55.075593   39074 cri.go:89] found id: ""
	I1002 20:20:55.075608   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.075613   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:55.075618   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:55.075683   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:55.100182   39074 cri.go:89] found id: ""
	I1002 20:20:55.100196   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.100202   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:55.100206   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:55.100245   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:55.125869   39074 cri.go:89] found id: ""
	I1002 20:20:55.125883   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.125890   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:55.125898   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:55.125907   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:55.194871   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:55.194894   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:55.206048   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:55.206063   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:55.259703   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:55.253143    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.253642    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.255145    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.255538    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.257050    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:55.259714   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:55.259723   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:55.319375   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:55.319393   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
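	The cycle above (and each near-identical cycle below, spaced roughly three seconds apart) probes every expected control-plane component by name with "sudo crictl ps -a --quiet --name=<component>"; an empty result is what produces each 'No container was found matching ...' warning. A minimal standalone sketch of that sweep, assuming only that crictl is on PATH and sudo is passwordless -- an illustration, not minikube's actual ssh_runner plumbing:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the log's `sudo crictl ps -a --quiet --name=<name>`:
    // it returns the IDs of all containers (any state) whose name matches.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, name := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        } {
            ids, err := listContainerIDs(name)
            if err != nil {
                fmt.Printf("E listing %q: %v\n", name, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("W no container was found matching %q\n", name)
                continue
            }
            fmt.Printf("I %q: %d container(s): %v\n", name, len(ids), ids)
        }
    }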
	I1002 20:20:57.847993   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:57.858498   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:57.858550   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:57.881390   39074 cri.go:89] found id: ""
	I1002 20:20:57.881404   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.881412   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:57.881416   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:57.881460   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:57.905251   39074 cri.go:89] found id: ""
	I1002 20:20:57.905267   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.905274   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:57.905279   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:57.905318   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:57.931213   39074 cri.go:89] found id: ""
	I1002 20:20:57.931226   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.931233   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:57.931238   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:57.931280   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:57.954527   39074 cri.go:89] found id: ""
	I1002 20:20:57.954544   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.954558   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:57.954564   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:57.954604   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:57.978788   39074 cri.go:89] found id: ""
	I1002 20:20:57.978801   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.978807   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:57.978811   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:57.978861   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:58.004052   39074 cri.go:89] found id: ""
	I1002 20:20:58.004067   39074 logs.go:282] 0 containers: []
	W1002 20:20:58.004075   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:58.004082   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:58.004123   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:58.028322   39074 cri.go:89] found id: ""
	I1002 20:20:58.028335   39074 logs.go:282] 0 containers: []
	W1002 20:20:58.028341   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:58.028348   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:58.028357   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:58.094257   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:58.094275   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:58.105903   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:58.105918   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:58.160072   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:58.153230   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.153795   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.155325   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.155732   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.157257   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:58.160081   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:58.160090   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:58.219413   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:58.219430   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
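	When no containers turn up, the loop falls back to host-level sources: the last 400 journal lines for the kubelet and crio units, plus warning-or-worse kernel messages from dmesg. A sketch that runs the same gather commands shown in the log, assuming a systemd host with bash available (a hypothetical standalone program, not the ssh_runner code path):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The same gathering commands the log shows, run via bash -c so the
        // dmesg pipeline behaves exactly as it does over ssh_runner.
        cmds := map[string]string{
            "kubelet": "sudo journalctl -u kubelet -n 400",
            "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "CRI-O":   "sudo journalctl -u crio -n 400",
        }
        for name, cmd := range cmds {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
        }
    }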
	I1002 20:21:00.748760   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:00.759397   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:00.759452   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:00.783722   39074 cri.go:89] found id: ""
	I1002 20:21:00.783738   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.783747   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:00.783755   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:00.783811   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:00.808536   39074 cri.go:89] found id: ""
	I1002 20:21:00.808552   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.808560   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:00.808565   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:00.808619   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:00.833822   39074 cri.go:89] found id: ""
	I1002 20:21:00.833839   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.833846   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:00.833850   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:00.833893   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:00.857297   39074 cri.go:89] found id: ""
	I1002 20:21:00.857311   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.857317   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:00.857322   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:00.857372   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:00.882563   39074 cri.go:89] found id: ""
	I1002 20:21:00.882578   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.882586   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:00.882592   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:00.882664   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:00.907673   39074 cri.go:89] found id: ""
	I1002 20:21:00.907689   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.907698   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:00.907704   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:00.907746   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:00.932133   39074 cri.go:89] found id: ""
	I1002 20:21:00.932148   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.932156   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:00.932165   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:00.932179   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:01.000177   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:01.000198   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:01.012252   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:01.012267   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:01.068351   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:01.061526   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.062112   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.063638   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.064089   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.065590   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:01.068361   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:01.068370   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:01.128987   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:01.129007   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
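	The recurring describe-nodes failure is itself diagnostic: kubectl is pointed at the node-local kubeconfig, which (judging from the errors) targets https://localhost:8441, and every probe gets connection refused -- consistent with the empty kube-apiserver listings above. A sketch of that check treating the refusal as "not ready yet" rather than a hard error; the binary and kubeconfig paths are copied from the log lines, while the classification logic is an assumption:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same invocation as the log's "describe nodes" gather.
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
            "describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
        var stderr bytes.Buffer
        cmd.Stderr = &stderr
        if err := cmd.Run(); err != nil {
            if strings.Contains(stderr.String(), "connection refused") {
                fmt.Println("apiserver not reachable yet; keep polling")
                return
            }
            fmt.Printf("describe nodes failed: %v\n%s", err, stderr.String())
            return
        }
        fmt.Println("apiserver answered; node description succeeded")
    }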
	I1002 20:21:03.659911   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:03.670393   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:03.670439   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:03.695784   39074 cri.go:89] found id: ""
	I1002 20:21:03.695796   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.695802   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:03.695806   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:03.695846   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:03.720085   39074 cri.go:89] found id: ""
	I1002 20:21:03.720098   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.720104   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:03.720109   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:03.720150   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:03.745925   39074 cri.go:89] found id: ""
	I1002 20:21:03.745940   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.745950   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:03.745958   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:03.745996   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:03.770616   39074 cri.go:89] found id: ""
	I1002 20:21:03.770632   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.770639   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:03.770655   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:03.770711   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:03.793953   39074 cri.go:89] found id: ""
	I1002 20:21:03.793969   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.793977   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:03.793982   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:03.794028   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:03.818909   39074 cri.go:89] found id: ""
	I1002 20:21:03.818925   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.818933   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:03.818940   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:03.818996   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:03.843200   39074 cri.go:89] found id: ""
	I1002 20:21:03.843213   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.843219   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:03.843228   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:03.843237   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:03.901520   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:03.901537   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:03.929305   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:03.929319   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:03.993117   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:03.993134   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:04.004664   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:04.004678   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:04.058624   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:04.051963   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.052457   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.053947   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.054366   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.055857   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
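	Each cycle opens with "sudo pgrep -xnf kube-apiserver.*minikube.*", and the timestamps show that probe repeating about every three seconds. A deadline-bounded version of that wait loop is sketched below; the 3-second interval matches the observed cadence, while the 8-minute timeout is an illustrative guess, not minikube's actual constant:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls the pgrep probe seen at the top of each
    // cycle until it finds a PID or the deadline passes. pgrep exits non-zero
    // when nothing matches, so err == nil means a process was found.
    func waitForAPIServerProcess(interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil && len(out) > 0 {
                fmt.Printf("kube-apiserver process found: pid %s", out)
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(3*time.Second, 8*time.Minute); err != nil {
            fmt.Println(err)
        }
    }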
	I1002 20:21:06.560322   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:06.570866   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:06.570909   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:06.594524   39074 cri.go:89] found id: ""
	I1002 20:21:06.594536   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.594542   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:06.594547   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:06.594586   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:06.619717   39074 cri.go:89] found id: ""
	I1002 20:21:06.619730   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.619741   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:06.619747   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:06.619787   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:06.643975   39074 cri.go:89] found id: ""
	I1002 20:21:06.643989   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.643994   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:06.643999   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:06.644051   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:06.667642   39074 cri.go:89] found id: ""
	I1002 20:21:06.667674   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.667683   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:06.667690   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:06.667735   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:06.692383   39074 cri.go:89] found id: ""
	I1002 20:21:06.692398   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.692406   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:06.692411   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:06.692459   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:06.716132   39074 cri.go:89] found id: ""
	I1002 20:21:06.716148   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.716157   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:06.716162   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:06.716206   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:06.740781   39074 cri.go:89] found id: ""
	I1002 20:21:06.740794   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.740800   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:06.740809   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:06.740817   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:06.809048   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:06.809064   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:06.820121   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:06.820134   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:06.873477   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:06.866935   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:06.867506   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:06.869037   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:06.869480   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:06.870947   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:06.873489   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:06.873503   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:06.932869   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:06.932885   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
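	The container status gather is deliberately defensive shell: resolve crictl with which (falling back to the bare name if which finds nothing), and if the crictl listing fails outright, try "docker ps -a" instead. Running it through bash -c preserves the backticks and || chaining; a minimal sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Verbatim fallback chain from the log: prefer a resolved crictl
        // path, tolerate a missing `which` result, and only consult docker
        // when the crictl listing itself fails.
        const cmd = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("both crictl and docker listings failed: %v\n", err)
        }
        fmt.Printf("%s", out)
    }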
	I1002 20:21:09.461200   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:09.471453   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:09.471494   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:09.495052   39074 cri.go:89] found id: ""
	I1002 20:21:09.495076   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.495083   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:09.495090   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:09.495142   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:09.520680   39074 cri.go:89] found id: ""
	I1002 20:21:09.520694   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.520699   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:09.520704   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:09.520745   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:09.544279   39074 cri.go:89] found id: ""
	I1002 20:21:09.544292   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.544300   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:09.544305   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:09.544343   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:09.568552   39074 cri.go:89] found id: ""
	I1002 20:21:09.568564   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.568570   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:09.568575   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:09.568636   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:09.593483   39074 cri.go:89] found id: ""
	I1002 20:21:09.593496   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.593504   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:09.593509   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:09.593548   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:09.618504   39074 cri.go:89] found id: ""
	I1002 20:21:09.618518   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.618524   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:09.618529   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:09.618568   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:09.644028   39074 cri.go:89] found id: ""
	I1002 20:21:09.644040   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.644046   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:09.644054   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:09.644068   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:09.709968   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:09.709989   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:09.721282   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:09.721295   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:09.774963   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:09.768383   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:09.768943   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:09.770534   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:09.770976   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:09.772525   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:09.774974   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:09.774985   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:09.833762   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:09.833780   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:12.362468   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:12.372596   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:12.372637   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:12.398178   39074 cri.go:89] found id: ""
	I1002 20:21:12.398193   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.398202   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:12.398208   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:12.398255   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:12.422734   39074 cri.go:89] found id: ""
	I1002 20:21:12.422751   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.422759   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:12.422764   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:12.422806   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:12.446773   39074 cri.go:89] found id: ""
	I1002 20:21:12.446791   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.446799   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:12.446806   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:12.446847   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:12.470795   39074 cri.go:89] found id: ""
	I1002 20:21:12.470808   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.470815   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:12.470819   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:12.470858   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:12.494783   39074 cri.go:89] found id: ""
	I1002 20:21:12.494796   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.494801   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:12.494805   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:12.494845   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:12.518163   39074 cri.go:89] found id: ""
	I1002 20:21:12.518177   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.518182   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:12.518187   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:12.518226   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:12.542626   39074 cri.go:89] found id: ""
	I1002 20:21:12.542638   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.542643   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:12.542663   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:12.542679   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:12.553111   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:12.553122   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:12.607093   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:12.600525   10620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:12.601040   10620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:12.602535   10620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:12.602952   10620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:12.604425   10620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:12.607103   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:12.607112   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:12.666819   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:12.666837   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:12.694057   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:12.694071   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:15.261212   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:15.271321   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:15.271362   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:15.296775   39074 cri.go:89] found id: ""
	I1002 20:21:15.296788   39074 logs.go:282] 0 containers: []
	W1002 20:21:15.296795   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:15.296799   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:15.296841   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:15.320931   39074 cri.go:89] found id: ""
	I1002 20:21:15.320944   39074 logs.go:282] 0 containers: []
	W1002 20:21:15.320950   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:15.320954   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:15.320996   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:15.344685   39074 cri.go:89] found id: ""
	I1002 20:21:15.344698   39074 logs.go:282] 0 containers: []
	W1002 20:21:15.344704   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:15.344709   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:15.344748   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:15.368513   39074 cri.go:89] found id: ""
	I1002 20:21:15.368527   39074 logs.go:282] 0 containers: []
	W1002 20:21:15.368534   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:15.368538   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:15.368605   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:15.392399   39074 cri.go:89] found id: ""
	I1002 20:21:15.392414   39074 logs.go:282] 0 containers: []
	W1002 20:21:15.392422   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:15.392428   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:15.392486   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:15.416043   39074 cri.go:89] found id: ""
	I1002 20:21:15.416056   39074 logs.go:282] 0 containers: []
	W1002 20:21:15.416062   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:15.416066   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:15.416110   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:15.440250   39074 cri.go:89] found id: ""
	I1002 20:21:15.440263   39074 logs.go:282] 0 containers: []
	W1002 20:21:15.440269   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:15.440276   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:15.440285   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:15.467533   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:15.467548   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:15.533766   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:15.533790   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:15.544835   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:15.544851   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:15.599678   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:15.592798   10755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:15.593349   10755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:15.594871   10755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:15.595307   10755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:15.596919   10755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:15.599691   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:15.599702   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
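	Note that the gather order rotates between cycles: kubelet first at 20:21:06, dmesg first at 20:21:12, container status first at 20:21:15 with CRI-O last. That pattern is what Go map iteration produces, so the log sources are plausibly held in a map -- an inference from the ordering, not something the log states. A tiny demonstration:

    package main

    import "fmt"

    func main() {
        // Go deliberately randomizes map iteration order, which would explain
        // why the "Gathering logs for ..." lines appear in a different order
        // from one cycle to the next above.
        sources := map[string]bool{
            "kubelet": true, "dmesg": true, "describe nodes": true,
            "CRI-O": true, "container status": true,
        }
        for i := 0; i < 3; i++ {
            for name := range sources {
                fmt.Printf("Gathering logs for %s ...\n", name)
            }
            fmt.Println("--")
        }
    }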
	I1002 20:21:18.165132   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:18.175676   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:18.175725   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:18.199922   39074 cri.go:89] found id: ""
	I1002 20:21:18.199940   39074 logs.go:282] 0 containers: []
	W1002 20:21:18.199946   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:18.199951   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:18.199992   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:18.223152   39074 cri.go:89] found id: ""
	I1002 20:21:18.223169   39074 logs.go:282] 0 containers: []
	W1002 20:21:18.223177   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:18.223184   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:18.223227   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:18.246742   39074 cri.go:89] found id: ""
	I1002 20:21:18.246757   39074 logs.go:282] 0 containers: []
	W1002 20:21:18.246766   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:18.246772   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:18.246816   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:18.270031   39074 cri.go:89] found id: ""
	I1002 20:21:18.270044   39074 logs.go:282] 0 containers: []
	W1002 20:21:18.270050   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:18.270055   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:18.270106   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:18.294199   39074 cri.go:89] found id: ""
	I1002 20:21:18.294213   39074 logs.go:282] 0 containers: []
	W1002 20:21:18.294220   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:18.294224   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:18.294265   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:18.319955   39074 cri.go:89] found id: ""
	I1002 20:21:18.319968   39074 logs.go:282] 0 containers: []
	W1002 20:21:18.319974   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:18.319979   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:18.320027   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:18.346187   39074 cri.go:89] found id: ""
	I1002 20:21:18.346202   39074 logs.go:282] 0 containers: []
	W1002 20:21:18.346209   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:18.346218   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:18.346230   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:18.412451   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:18.412469   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:18.423898   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:18.423911   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:18.477273   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:18.470574   10859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:18.471135   10859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:18.472841   10859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:18.473326   10859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:18.474859   10859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:18.477287   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:18.477297   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:18.536355   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:18.536373   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
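
The block above is one iteration of minikube's control-plane probe: for each expected component it asks the CRI runtime for matching containers and finds none. A minimal Go sketch of the same pattern, assuming a host with crictl reachable via sudo (names here are illustrative, not minikube's actual cri.go):

// Sketch only: enumerate the components the log checks and run the
// same `sudo crictl ps -a --quiet --name=<component>` query for each.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl query for %q failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name) // the state the log reports
			continue
		}
		fmt.Printf("%q: %v\n", name, ids)
	}
}
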
	I1002 20:21:21.066419   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:21.076563   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:21.076666   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:21.102164   39074 cri.go:89] found id: ""
	I1002 20:21:21.102177   39074 logs.go:282] 0 containers: []
	W1002 20:21:21.102183   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:21.102188   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:21.102232   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:21.129158   39074 cri.go:89] found id: ""
	I1002 20:21:21.129173   39074 logs.go:282] 0 containers: []
	W1002 20:21:21.129182   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:21.129188   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:21.129231   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:21.154477   39074 cri.go:89] found id: ""
	I1002 20:21:21.154492   39074 logs.go:282] 0 containers: []
	W1002 20:21:21.154497   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:21.154502   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:21.154546   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:21.180534   39074 cri.go:89] found id: ""
	I1002 20:21:21.180549   39074 logs.go:282] 0 containers: []
	W1002 20:21:21.180555   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:21.180561   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:21.180620   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:21.206019   39074 cri.go:89] found id: ""
	I1002 20:21:21.206031   39074 logs.go:282] 0 containers: []
	W1002 20:21:21.206038   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:21.206046   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:21.206084   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:21.230114   39074 cri.go:89] found id: ""
	I1002 20:21:21.230127   39074 logs.go:282] 0 containers: []
	W1002 20:21:21.230133   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:21.230138   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:21.230178   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:21.254824   39074 cri.go:89] found id: ""
	I1002 20:21:21.254838   39074 logs.go:282] 0 containers: []
	W1002 20:21:21.254844   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:21.254851   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:21.254860   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:21.317018   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:21.317035   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:21.343844   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:21.343858   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:21.408925   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:21.408944   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:21.419821   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:21.419835   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:21.471978   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:21.465582   10999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:21.466082   10999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:21.467579   10999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:21.467942   10999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:21.469419   10999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:21.465582   10999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:21.466082   10999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:21.467579   10999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:21.467942   10999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:21.469419   10999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
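
Every `describe nodes` attempt above fails the same way: kubectl cannot even open a TCP connection to localhost:8441, so no API group list is fetched. The same check can be reproduced without kubectl by dialing the port directly; a minimal sketch:

// Sketch only: a raw TCP dial to the apiserver port distinguishes
// "nothing is listening" (the refused connections in the stderr above)
// from a listening-but-unhealthy apiserver.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}
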
	I1002 20:21:23.973621   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:23.984622   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:23.984691   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:24.008789   39074 cri.go:89] found id: ""
	I1002 20:21:24.008805   39074 logs.go:282] 0 containers: []
	W1002 20:21:24.008814   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:24.008820   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:24.008867   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:24.034564   39074 cri.go:89] found id: ""
	I1002 20:21:24.034581   39074 logs.go:282] 0 containers: []
	W1002 20:21:24.034596   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:24.034603   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:24.034643   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:24.059176   39074 cri.go:89] found id: ""
	I1002 20:21:24.059189   39074 logs.go:282] 0 containers: []
	W1002 20:21:24.059194   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:24.059199   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:24.059247   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:24.083475   39074 cri.go:89] found id: ""
	I1002 20:21:24.083488   39074 logs.go:282] 0 containers: []
	W1002 20:21:24.083495   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:24.083499   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:24.083550   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:24.108059   39074 cri.go:89] found id: ""
	I1002 20:21:24.108072   39074 logs.go:282] 0 containers: []
	W1002 20:21:24.108078   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:24.108083   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:24.108124   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:24.132959   39074 cri.go:89] found id: ""
	I1002 20:21:24.132973   39074 logs.go:282] 0 containers: []
	W1002 20:21:24.132978   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:24.132983   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:24.133023   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:24.157626   39074 cri.go:89] found id: ""
	I1002 20:21:24.157638   39074 logs.go:282] 0 containers: []
	W1002 20:21:24.157644   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:24.157666   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:24.157677   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:24.222240   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:24.222258   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:24.252463   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:24.252477   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:24.322663   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:24.322681   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:24.334105   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:24.334119   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:24.388449   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:24.381839   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:24.382360   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:24.383974   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:24.384421   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:24.385999   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:24.381839   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:24.382360   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:24.383974   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:24.384421   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:24.385999   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
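
The timestamps show the whole probe cycle repeating on a roughly three-second cadence (20:21:21, :24, :27, ...). A stdlib sketch of that retry shape, with a hypothetical probe() standing in for the pgrep and crictl checks above:

// Sketch only: retry probe() every ~3s until it succeeds or a deadline
// passes; probe is a placeholder, not a real minikube function.
package main

import (
	"fmt"
	"time"
)

func pollUntil(timeout time.Duration, probe func() bool) bool {
	deadline := time.Now().Add(timeout)
	for {
		if probe() {
			return true
		}
		if time.Now().After(deadline) {
			return false
		}
		time.Sleep(3 * time.Second)
	}
}

func main() {
	start := time.Now()
	ok := pollUntil(10*time.Second, func() bool { return time.Since(start) > 7*time.Second })
	fmt.Println("probe eventually succeeded:", ok)
}
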
	I1002 20:21:26.890112   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:26.900667   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:26.900710   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:26.924781   39074 cri.go:89] found id: ""
	I1002 20:21:26.924794   39074 logs.go:282] 0 containers: []
	W1002 20:21:26.924800   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:26.924805   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:26.924846   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:26.948571   39074 cri.go:89] found id: ""
	I1002 20:21:26.948586   39074 logs.go:282] 0 containers: []
	W1002 20:21:26.948600   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:26.948606   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:26.948661   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:26.972451   39074 cri.go:89] found id: ""
	I1002 20:21:26.972466   39074 logs.go:282] 0 containers: []
	W1002 20:21:26.972472   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:26.972478   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:26.972525   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:26.997499   39074 cri.go:89] found id: ""
	I1002 20:21:26.997512   39074 logs.go:282] 0 containers: []
	W1002 20:21:26.997518   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:26.997523   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:26.997572   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:27.022056   39074 cri.go:89] found id: ""
	I1002 20:21:27.022072   39074 logs.go:282] 0 containers: []
	W1002 20:21:27.022078   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:27.022083   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:27.022124   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:27.046069   39074 cri.go:89] found id: ""
	I1002 20:21:27.046083   39074 logs.go:282] 0 containers: []
	W1002 20:21:27.046089   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:27.046095   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:27.046135   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:27.070455   39074 cri.go:89] found id: ""
	I1002 20:21:27.070469   39074 logs.go:282] 0 containers: []
	W1002 20:21:27.070475   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:27.070482   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:27.070493   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:27.139300   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:27.139317   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:27.150073   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:27.150086   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:27.203171   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:27.196472   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.196973   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.198530   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.198931   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.200409   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:27.196472   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.196973   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.198530   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.198931   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.200409   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:27.203181   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:27.203189   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:27.265474   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:27.265492   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
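
The "Gathering logs for ..." steps run a fixed set of shell commands: journalctl for kubelet and CRI-O, a filtered dmesg, kubectl describe nodes, and a crictl-or-docker container listing. A self-contained sketch that shells out the same way, assuming local root access instead of minikube's SSH runner:

// Sketch only: run each log source through bash -c, as the ssh_runner
// lines above do, and print whatever comes back.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
		}
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}
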
	I1002 20:21:29.793992   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:29.804235   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:29.804279   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:29.828729   39074 cri.go:89] found id: ""
	I1002 20:21:29.828743   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.828751   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:29.828757   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:29.828809   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:29.853355   39074 cri.go:89] found id: ""
	I1002 20:21:29.853372   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.853382   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:29.853388   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:29.853439   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:29.878218   39074 cri.go:89] found id: ""
	I1002 20:21:29.878231   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.878236   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:29.878241   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:29.878281   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:29.903091   39074 cri.go:89] found id: ""
	I1002 20:21:29.903105   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.903114   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:29.903120   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:29.903161   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:29.927692   39074 cri.go:89] found id: ""
	I1002 20:21:29.927710   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.927716   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:29.927720   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:29.927769   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:29.952593   39074 cri.go:89] found id: ""
	I1002 20:21:29.952608   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.952618   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:29.952624   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:29.952693   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:29.977117   39074 cri.go:89] found id: ""
	I1002 20:21:29.977133   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.977140   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:29.977150   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:29.977161   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:30.004687   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:30.004701   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:30.071166   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:30.071188   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:30.082387   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:30.082403   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:30.137131   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:30.130268   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.130846   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.132362   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.132758   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.134348   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:30.130268   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.130846   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.132362   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.132758   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.134348   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:30.137140   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:30.137148   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:32.698009   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:32.708134   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:32.708177   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:32.734103   39074 cri.go:89] found id: ""
	I1002 20:21:32.734117   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.734126   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:32.734131   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:32.734179   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:32.758404   39074 cri.go:89] found id: ""
	I1002 20:21:32.758417   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.758423   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:32.758431   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:32.758477   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:32.784135   39074 cri.go:89] found id: ""
	I1002 20:21:32.784150   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.784157   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:32.784161   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:32.784204   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:32.809641   39074 cri.go:89] found id: ""
	I1002 20:21:32.809684   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.809693   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:32.809697   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:32.809739   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:32.833831   39074 cri.go:89] found id: ""
	I1002 20:21:32.833847   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.833856   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:32.833862   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:32.833918   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:32.858510   39074 cri.go:89] found id: ""
	I1002 20:21:32.858523   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.858531   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:32.858537   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:32.858590   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:32.882883   39074 cri.go:89] found id: ""
	I1002 20:21:32.882898   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.882907   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:32.882916   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:32.882928   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:32.951104   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:32.951125   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:32.962042   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:32.962058   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:33.015746   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:33.009215   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.009701   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.011251   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.011629   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.013187   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:33.009215   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.009701   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.011251   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.011629   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.013187   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:33.015758   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:33.015772   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:33.074804   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:33.074821   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
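
`crictl ps --quiet` prints one container ID per line, so empty output is exactly what produces the `found id: ""` and `0 containers: []` pairs above. A small sketch of that parsing:

// Sketch only: split --quiet output on newlines and drop blanks; an
// empty string yields zero IDs, matching the log.
package main

import (
	"fmt"
	"strings"
)

func parseIDs(out string) []string {
	var ids []string
	for _, line := range strings.Split(out, "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids
}

func main() {
	fmt.Println(len(parseIDs("")))            // 0 containers, as in the log
	fmt.Println(parseIDs("abc123\ndef456\n")) // [abc123 def456]
}
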
	I1002 20:21:35.603185   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:35.613834   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:35.613876   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:35.638330   39074 cri.go:89] found id: ""
	I1002 20:21:35.638342   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.638348   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:35.638353   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:35.638391   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:35.661464   39074 cri.go:89] found id: ""
	I1002 20:21:35.661476   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.661482   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:35.661487   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:35.661529   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:35.684962   39074 cri.go:89] found id: ""
	I1002 20:21:35.684977   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.684983   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:35.684987   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:35.685036   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:35.708990   39074 cri.go:89] found id: ""
	I1002 20:21:35.709002   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.709007   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:35.709012   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:35.709054   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:35.732099   39074 cri.go:89] found id: ""
	I1002 20:21:35.732116   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.732125   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:35.732134   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:35.732179   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:35.756437   39074 cri.go:89] found id: ""
	I1002 20:21:35.756450   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.756456   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:35.756461   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:35.756501   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:35.782205   39074 cri.go:89] found id: ""
	I1002 20:21:35.782219   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.782225   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:35.782231   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:35.782240   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:35.849923   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:35.849941   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:35.861090   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:35.861104   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:35.914924   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:35.908496   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.908987   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.910547   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.911018   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.912498   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:35.908496   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.908987   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.910547   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.911018   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.912498   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:35.914934   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:35.914943   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:35.975011   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:35.975031   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:38.503369   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:38.513583   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:38.513630   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:38.538175   39074 cri.go:89] found id: ""
	I1002 20:21:38.538190   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.538197   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:38.538201   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:38.538239   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:38.562421   39074 cri.go:89] found id: ""
	I1002 20:21:38.562434   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.562440   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:38.562444   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:38.562510   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:38.587376   39074 cri.go:89] found id: ""
	I1002 20:21:38.587388   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.587394   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:38.587400   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:38.587439   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:38.611178   39074 cri.go:89] found id: ""
	I1002 20:21:38.611192   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.611198   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:38.611202   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:38.611243   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:38.635805   39074 cri.go:89] found id: ""
	I1002 20:21:38.635817   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.635823   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:38.635827   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:38.635872   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:38.660043   39074 cri.go:89] found id: ""
	I1002 20:21:38.660065   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.660071   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:38.660075   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:38.660115   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:38.683490   39074 cri.go:89] found id: ""
	I1002 20:21:38.683502   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.683508   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:38.683515   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:38.683522   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:38.741516   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:38.741534   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:38.769294   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:38.769308   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:38.838736   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:38.838753   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:38.849582   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:38.849612   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:38.903424   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:38.896399   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.896943   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.898498   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.898964   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.900463   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:38.896399   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.896943   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.898498   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.898964   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.900463   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
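
Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`. pgrep exits nonzero when no process matches, so the exit status alone answers whether an apiserver process exists; a sketch relying only on that exit code:

// Sketch only: -x matches the name exactly, -n picks the newest match,
// -f matches the pattern against the whole command line.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	if err != nil {
		fmt.Println("no kube-apiserver process found:", err) // the state seen above
		return
	}
	fmt.Println("kube-apiserver process is running")
}
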
	I1002 20:21:41.405089   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:41.415377   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:41.415426   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:41.440687   39074 cri.go:89] found id: ""
	I1002 20:21:41.440700   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.440707   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:41.440712   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:41.440755   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:41.465054   39074 cri.go:89] found id: ""
	I1002 20:21:41.465075   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.465081   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:41.465086   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:41.465140   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:41.489735   39074 cri.go:89] found id: ""
	I1002 20:21:41.489748   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.489754   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:41.489759   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:41.489799   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:41.514723   39074 cri.go:89] found id: ""
	I1002 20:21:41.514735   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.514740   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:41.514745   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:41.514786   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:41.538573   39074 cri.go:89] found id: ""
	I1002 20:21:41.538586   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.538592   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:41.538597   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:41.538669   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:41.563317   39074 cri.go:89] found id: ""
	I1002 20:21:41.563334   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.563343   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:41.563349   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:41.563389   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:41.587493   39074 cri.go:89] found id: ""
	I1002 20:21:41.587509   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.587515   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:41.587522   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:41.587532   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:41.657445   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:41.657473   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:41.668994   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:41.669012   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:41.722898   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:41.715908   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.716372   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.718002   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.718454   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.720024   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:41.715908   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.716372   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.718002   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.718454   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.720024   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:41.722911   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:41.722919   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:41.780887   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:41.780909   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:44.310936   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:44.322755   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:44.322807   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:44.347939   39074 cri.go:89] found id: ""
	I1002 20:21:44.347951   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.347958   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:44.347962   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:44.348004   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:44.372444   39074 cri.go:89] found id: ""
	I1002 20:21:44.372460   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.372466   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:44.372472   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:44.372514   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:44.397131   39074 cri.go:89] found id: ""
	I1002 20:21:44.397148   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.397157   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:44.397163   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:44.397215   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:44.421209   39074 cri.go:89] found id: ""
	I1002 20:21:44.421222   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.421228   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:44.421232   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:44.421269   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:44.445113   39074 cri.go:89] found id: ""
	I1002 20:21:44.445125   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.445131   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:44.445135   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:44.445178   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:44.469164   39074 cri.go:89] found id: ""
	I1002 20:21:44.469178   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.469185   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:44.469191   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:44.469248   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:44.494058   39074 cri.go:89] found id: ""
	I1002 20:21:44.494070   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.494076   39074 logs.go:284] No container was found matching "kindnet"
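The block above is minikube sweeping every expected control-plane workload by container name and coming back empty each time. The same sweep can be reproduced by hand with the crictl invocation from the log (a sketch; the component list is copied from the checks above):

    # One line per component; an empty ID column means no container,
    # running or exited, has ever been created for it.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      printf '%-24s %s\n' "$c" "$(sudo crictl ps -a --quiet --name=$c | head -n1)"
    done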
	I1002 20:21:44.494083   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:44.494091   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:44.563166   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:44.563185   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:44.574587   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:44.574601   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:44.627643   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:44.620697   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.621137   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.622679   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.623151   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.624644   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:44.620697   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.621137   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.622679   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.623151   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.624644   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
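The "describe nodes" step runs the node-local kubectl binary against the node-local kubeconfig, and it fails with the same connection-refused error because that kubeconfig also points at localhost:8441. The exact invocation, taken verbatim from the log, can be replayed in isolation to separate kubeconfig problems from apiserver problems:

    # Replaying the failing step as-is (paths copied from the log above);
    # exit status 1 with the memcache.go errors reproduces the failure.
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig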
	I1002 20:21:44.627670   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:44.627681   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:44.688606   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:44.688623   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:47.218714   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:47.229181   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:47.229224   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:47.254586   39074 cri.go:89] found id: ""
	I1002 20:21:47.254600   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.254607   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:47.254611   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:47.254666   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:47.277466   39074 cri.go:89] found id: ""
	I1002 20:21:47.277479   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.277485   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:47.277489   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:47.277529   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:47.300741   39074 cri.go:89] found id: ""
	I1002 20:21:47.300754   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.300759   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:47.300764   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:47.300819   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:47.325015   39074 cri.go:89] found id: ""
	I1002 20:21:47.325030   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.325037   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:47.325042   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:47.325086   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:47.349241   39074 cri.go:89] found id: ""
	I1002 20:21:47.349256   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.349264   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:47.349270   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:47.349322   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:47.373778   39074 cri.go:89] found id: ""
	I1002 20:21:47.373790   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.373796   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:47.373801   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:47.373847   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:47.397514   39074 cri.go:89] found id: ""
	I1002 20:21:47.397527   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.397532   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:47.397539   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:47.397550   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:47.452728   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:47.446108   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.446609   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.448123   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.448540   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.450035   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:47.446108   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.446609   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.448123   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.448540   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.450035   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:47.452738   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:47.452748   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:47.513401   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:47.513419   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:47.542325   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:47.542339   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:47.607380   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:47.607397   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
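The "Gathering logs" steps in each cycle shell out to journalctl and dmesg with fixed 400-line tails. The commands below are the ones from the log, runnable as-is inside the node when you want the same evidence without waiting for minikube's collector (--no-pager is added here for interactive use; the collector itself does not pass it):

    sudo journalctl -u kubelet -n 400 --no-pager   # kubelet unit log
    sudo journalctl -u crio -n 400 --no-pager      # CRI-O unit log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400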
	I1002 20:21:50.119560   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:50.129969   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:50.130031   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:50.154300   39074 cri.go:89] found id: ""
	I1002 20:21:50.154314   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.154322   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:50.154329   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:50.154381   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:50.178814   39074 cri.go:89] found id: ""
	I1002 20:21:50.178831   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.178840   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:50.178846   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:50.178886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:50.202532   39074 cri.go:89] found id: ""
	I1002 20:21:50.202546   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.202553   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:50.202558   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:50.202597   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:50.227602   39074 cri.go:89] found id: ""
	I1002 20:21:50.227620   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.227630   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:50.227636   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:50.227705   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:50.254467   39074 cri.go:89] found id: ""
	I1002 20:21:50.254479   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.254485   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:50.254490   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:50.254534   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:50.279114   39074 cri.go:89] found id: ""
	I1002 20:21:50.279132   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.279141   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:50.279147   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:50.279196   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:50.303673   39074 cri.go:89] found id: ""
	I1002 20:21:50.303689   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.303695   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:50.303703   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:50.303712   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:50.367227   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:50.367244   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:50.394498   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:50.394517   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:50.463556   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:50.463573   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:50.475248   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:50.475266   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:50.530138   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:50.523630   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.524260   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.525840   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.526247   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.527437   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:50.523630   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.524260   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.525840   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.526247   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.527437   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
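Each retry cycle opens with the pgrep probe seen on the next line: -f matches against the full command line, -x requires the pattern to match that line exactly, and -n keeps only the newest match. With no kube-apiserver process present it exits non-zero, and minikube falls through to another round of log gathering every few seconds. A standalone version of that wait loop might look like this (a sketch; the pattern is copied from the log, the interval inferred from the timestamps):

    # Block until an apiserver process for this minikube node appears.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done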
	I1002 20:21:53.031819   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:53.042276   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:53.042319   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:53.066835   39074 cri.go:89] found id: ""
	I1002 20:21:53.066850   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.066865   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:53.066872   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:53.066914   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:53.090995   39074 cri.go:89] found id: ""
	I1002 20:21:53.091008   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.091014   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:53.091018   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:53.091057   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:53.116027   39074 cri.go:89] found id: ""
	I1002 20:21:53.116043   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.116051   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:53.116056   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:53.116097   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:53.141627   39074 cri.go:89] found id: ""
	I1002 20:21:53.141640   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.141661   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:53.141668   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:53.141710   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:53.167140   39074 cri.go:89] found id: ""
	I1002 20:21:53.167157   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.167163   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:53.167167   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:53.167210   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:53.190437   39074 cri.go:89] found id: ""
	I1002 20:21:53.190453   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.190459   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:53.190464   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:53.190506   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:53.214513   39074 cri.go:89] found id: ""
	I1002 20:21:53.214527   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.214534   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:53.214541   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:53.214550   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:53.282233   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:53.282249   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:53.293348   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:53.293361   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:53.347988   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:53.341334   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.341823   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.343307   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.343741   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.345249   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:53.341334   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.341823   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.343307   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.343741   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.345249   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:53.347998   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:53.348008   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:53.407000   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:53.407019   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:55.936592   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:55.946748   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:55.946803   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:55.971330   39074 cri.go:89] found id: ""
	I1002 20:21:55.971347   39074 logs.go:282] 0 containers: []
	W1002 20:21:55.971353   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:55.971358   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:55.971398   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:55.995571   39074 cri.go:89] found id: ""
	I1002 20:21:55.995585   39074 logs.go:282] 0 containers: []
	W1002 20:21:55.995591   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:55.995595   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:55.995635   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:56.020541   39074 cri.go:89] found id: ""
	I1002 20:21:56.020563   39074 logs.go:282] 0 containers: []
	W1002 20:21:56.020573   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:56.020578   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:56.020620   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:56.045458   39074 cri.go:89] found id: ""
	I1002 20:21:56.045474   39074 logs.go:282] 0 containers: []
	W1002 20:21:56.045480   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:56.045485   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:56.045524   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:56.069082   39074 cri.go:89] found id: ""
	I1002 20:21:56.069094   39074 logs.go:282] 0 containers: []
	W1002 20:21:56.069101   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:56.069105   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:56.069150   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:56.094402   39074 cri.go:89] found id: ""
	I1002 20:21:56.094417   39074 logs.go:282] 0 containers: []
	W1002 20:21:56.094425   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:56.094430   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:56.094471   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:56.118733   39074 cri.go:89] found id: ""
	I1002 20:21:56.118748   39074 logs.go:282] 0 containers: []
	W1002 20:21:56.118755   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:56.118764   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:56.118776   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:56.186773   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:56.186792   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:56.198306   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:56.198321   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:56.253135   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:56.246592   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.247035   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.248560   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.249003   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.250528   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:56.246592   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.247035   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.248560   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.249003   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.250528   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:56.253144   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:56.253156   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:56.313368   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:56.313384   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
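The "container status" step uses a defensive fallback chain: resolve crictl via which (falling back to the bare name), and if the whole crictl command fails, try docker ps -a instead, so the same step works on CRI-O and Docker runtimes alike. The log's one-liner, spelled out on separate lines:

    # Same fallback chain as the log's backtick one-liner.
    sudo "$(which crictl || echo crictl)" ps -a \
      || sudo docker ps -a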
	I1002 20:21:58.841758   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:58.852748   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:58.852795   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:58.878085   39074 cri.go:89] found id: ""
	I1002 20:21:58.878101   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.878109   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:58.878115   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:58.878169   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:58.903034   39074 cri.go:89] found id: ""
	I1002 20:21:58.903047   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.903054   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:58.903058   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:58.903097   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:58.928063   39074 cri.go:89] found id: ""
	I1002 20:21:58.928079   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.928085   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:58.928090   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:58.928132   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:58.953963   39074 cri.go:89] found id: ""
	I1002 20:21:58.953976   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.953982   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:58.953987   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:58.954039   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:58.980346   39074 cri.go:89] found id: ""
	I1002 20:21:58.980363   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.980372   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:58.980379   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:58.980430   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:59.006332   39074 cri.go:89] found id: ""
	I1002 20:21:59.006348   39074 logs.go:282] 0 containers: []
	W1002 20:21:59.006357   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:59.006364   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:59.006422   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:59.030980   39074 cri.go:89] found id: ""
	I1002 20:21:59.030995   39074 logs.go:282] 0 containers: []
	W1002 20:21:59.031004   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:59.031013   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:59.031026   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:59.086481   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:59.079666   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.080350   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.081980   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.082417   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.083935   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:59.079666   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.080350   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.081980   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.082417   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.083935   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:59.086489   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:59.086498   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:59.150520   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:59.150539   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:59.178745   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:59.178759   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:59.248128   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:59.248146   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:01.761244   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:01.771733   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:01.771783   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:01.796879   39074 cri.go:89] found id: ""
	I1002 20:22:01.796894   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.796903   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:01.796908   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:01.796951   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:01.822376   39074 cri.go:89] found id: ""
	I1002 20:22:01.822389   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.822395   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:01.822400   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:01.822445   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:01.847608   39074 cri.go:89] found id: ""
	I1002 20:22:01.847622   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.847628   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:01.847633   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:01.847701   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:01.872893   39074 cri.go:89] found id: ""
	I1002 20:22:01.872913   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.872919   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:01.872924   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:01.872995   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:01.899179   39074 cri.go:89] found id: ""
	I1002 20:22:01.899197   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.899205   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:01.899210   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:01.899258   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:01.925133   39074 cri.go:89] found id: ""
	I1002 20:22:01.925149   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.925158   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:01.925165   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:01.925209   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:01.951281   39074 cri.go:89] found id: ""
	I1002 20:22:01.951294   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.951300   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:01.951307   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:01.951316   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:02.008670   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:02.001480   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.002372   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.004005   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.004404   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.005795   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:02.001480   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.002372   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.004005   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.004404   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.005795   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:02.008684   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:02.008697   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:02.072947   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:02.072969   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:02.102011   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:02.102027   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:02.168431   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:02.168449   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:04.680455   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:04.690926   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:04.690981   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:04.715368   39074 cri.go:89] found id: ""
	I1002 20:22:04.715384   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.715390   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:04.715394   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:04.715438   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:04.739937   39074 cri.go:89] found id: ""
	I1002 20:22:04.739951   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.739956   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:04.739960   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:04.739998   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:04.763534   39074 cri.go:89] found id: ""
	I1002 20:22:04.763546   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.763552   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:04.763556   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:04.763615   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:04.788497   39074 cri.go:89] found id: ""
	I1002 20:22:04.788512   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.788519   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:04.788523   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:04.788571   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:04.813000   39074 cri.go:89] found id: ""
	I1002 20:22:04.813012   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.813018   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:04.813022   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:04.813061   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:04.837324   39074 cri.go:89] found id: ""
	I1002 20:22:04.837336   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.837342   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:04.837347   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:04.837387   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:04.863392   39074 cri.go:89] found id: ""
	I1002 20:22:04.863404   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.863410   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:04.863416   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:04.863425   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:04.917001   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:04.910495   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.911044   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.912561   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.912981   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.914415   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:04.910495   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.911044   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.912561   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.912981   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.914415   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:04.917008   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:04.917017   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:04.980350   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:04.980366   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:05.007566   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:05.007580   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:05.076403   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:05.076419   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:07.589145   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:07.599347   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:07.599390   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:07.623799   39074 cri.go:89] found id: ""
	I1002 20:22:07.623812   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.623818   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:07.623823   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:07.623862   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:07.648210   39074 cri.go:89] found id: ""
	I1002 20:22:07.648222   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.648229   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:07.648233   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:07.648279   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:07.672861   39074 cri.go:89] found id: ""
	I1002 20:22:07.672874   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.672880   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:07.672885   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:07.672933   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:07.696504   39074 cri.go:89] found id: ""
	I1002 20:22:07.696521   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.696530   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:07.696535   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:07.696577   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:07.722324   39074 cri.go:89] found id: ""
	I1002 20:22:07.722340   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.722346   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:07.722351   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:07.722391   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:07.748388   39074 cri.go:89] found id: ""
	I1002 20:22:07.748402   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.748408   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:07.748412   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:07.748449   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:07.773539   39074 cri.go:89] found id: ""
	I1002 20:22:07.773557   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.773564   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:07.773570   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:07.773579   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:07.843853   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:07.843875   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:07.855493   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:07.855511   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:07.909935   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:07.903152   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.903746   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.905310   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.905743   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.907276   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:07.903152   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.903746   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.905310   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.905743   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.907276   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:07.909945   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:07.909955   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:07.971055   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:07.971072   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
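
The lines above form one complete pass of the probe the harness runs while waiting for the control plane: a pgrep for the profile's kube-apiserver process, a crictl query per expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), each finding no container, then log gathering from kubelet, dmesg, the node description, CRI-O, and container status. The same pass can be reproduced by hand from a shell inside the node (for example via minikube ssh); the sketch below only reuses commands that appear verbatim in this log and assumes crictl and journalctl are present in the node image.

    # Hand-run version of the probe pass logged above; run inside the node.
    components="kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet"

    # Is an apiserver process for this profile running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'

    # List CRI containers in any state for each expected component.
    for name in $components; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -n "$ids" ] && echo "$name: $ids" || echo "no container matching \"$name\""
    done

    # Collect the same logs the harness gathers.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
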
	I1002 20:22:10.498842   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:10.509052   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:10.509100   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:10.532641   39074 cri.go:89] found id: ""
	I1002 20:22:10.532673   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.532683   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:10.532689   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:10.532737   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:10.555850   39074 cri.go:89] found id: ""
	I1002 20:22:10.555865   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.555872   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:10.555877   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:10.555943   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:10.579608   39074 cri.go:89] found id: ""
	I1002 20:22:10.579623   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.579631   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:10.579636   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:10.579701   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:10.603930   39074 cri.go:89] found id: ""
	I1002 20:22:10.603945   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.603954   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:10.603960   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:10.604006   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:10.627050   39074 cri.go:89] found id: ""
	I1002 20:22:10.627063   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.627070   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:10.627074   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:10.627115   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:10.650231   39074 cri.go:89] found id: ""
	I1002 20:22:10.650246   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.650254   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:10.650261   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:10.650309   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:10.674381   39074 cri.go:89] found id: ""
	I1002 20:22:10.674396   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.674404   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:10.674413   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:10.674422   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:10.743365   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:10.743388   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:10.754432   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:10.754446   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:10.809037   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:10.802468   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.802995   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.804524   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.804992   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.806544   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:10.802468   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.802995   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.804524   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.804992   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.806544   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:10.809051   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:10.809061   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:10.866627   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:10.866642   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
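
Every describe-nodes attempt in this window fails the same way: the bundled kubectl (/var/lib/minikube/binaries/v1.34.1/kubectl, pointed at /var/lib/minikube/kubeconfig) gets connection refused on localhost:8441, which is consistent with the crictl queries above finding no kube-apiserver container at all. A minimal reachability check from inside the node is sketched below, assuming curl is available; the /livez path is the standard apiserver health endpoint and is an assumption here, not something taken from this log.

    # Probe the apiserver port the kubeconfig points at (8441 per the log).
    curl -sk --max-time 5 https://localhost:8441/livez \
      || echo 'apiserver not reachable on localhost:8441'

    # The exact describe call the harness retries, for comparison:
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
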
	I1002 20:22:13.395270   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:13.405561   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:13.405603   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:13.429063   39074 cri.go:89] found id: ""
	I1002 20:22:13.429076   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.429081   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:13.429086   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:13.429125   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:13.452589   39074 cri.go:89] found id: ""
	I1002 20:22:13.452604   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.452609   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:13.452613   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:13.452669   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:13.476844   39074 cri.go:89] found id: ""
	I1002 20:22:13.476856   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.476862   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:13.476866   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:13.476905   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:13.501936   39074 cri.go:89] found id: ""
	I1002 20:22:13.501948   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.501955   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:13.501960   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:13.502000   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:13.526895   39074 cri.go:89] found id: ""
	I1002 20:22:13.526907   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.526913   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:13.526917   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:13.526968   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:13.550888   39074 cri.go:89] found id: ""
	I1002 20:22:13.550902   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.550910   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:13.550914   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:13.550960   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:13.573769   39074 cri.go:89] found id: ""
	I1002 20:22:13.573784   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.573790   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:13.573796   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:13.573807   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:13.626468   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:13.620002   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.620523   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.622171   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.622562   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.623979   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:13.620002   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.620523   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.622171   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.622562   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.623979   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:13.626477   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:13.626485   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:13.685732   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:13.685747   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:13.713954   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:13.713970   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:13.785525   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:13.785541   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:16.298756   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:16.309103   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:16.309143   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:16.335506   39074 cri.go:89] found id: ""
	I1002 20:22:16.335521   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.335529   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:16.335535   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:16.335586   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:16.359417   39074 cri.go:89] found id: ""
	I1002 20:22:16.359431   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.359437   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:16.359442   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:16.359482   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:16.383496   39074 cri.go:89] found id: ""
	I1002 20:22:16.383509   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.383517   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:16.383523   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:16.383578   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:16.409227   39074 cri.go:89] found id: ""
	I1002 20:22:16.409243   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.409250   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:16.409254   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:16.409294   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:16.433847   39074 cri.go:89] found id: ""
	I1002 20:22:16.433861   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.433870   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:16.433876   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:16.433933   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:16.457278   39074 cri.go:89] found id: ""
	I1002 20:22:16.457293   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.457299   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:16.457306   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:16.457345   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:16.482697   39074 cri.go:89] found id: ""
	I1002 20:22:16.482709   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.482715   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:16.482721   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:16.482730   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:16.548732   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:16.548752   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:16.559732   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:16.559747   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:16.612487   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:16.606170   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.606702   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.608183   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.608579   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.610023   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:16.606170   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.606702   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.608183   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.608579   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.610023   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:16.612499   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:16.612511   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:16.671684   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:16.671702   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:19.200094   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:19.210479   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:19.210527   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:19.235486   39074 cri.go:89] found id: ""
	I1002 20:22:19.235501   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.235510   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:19.235515   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:19.235560   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:19.259294   39074 cri.go:89] found id: ""
	I1002 20:22:19.259305   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.259312   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:19.259316   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:19.259353   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:19.283859   39074 cri.go:89] found id: ""
	I1002 20:22:19.283875   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.283884   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:19.283889   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:19.283941   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:19.307454   39074 cri.go:89] found id: ""
	I1002 20:22:19.307468   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.307473   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:19.307477   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:19.307519   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:19.332321   39074 cri.go:89] found id: ""
	I1002 20:22:19.332334   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.332340   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:19.332345   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:19.332384   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:19.356798   39074 cri.go:89] found id: ""
	I1002 20:22:19.356818   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.356826   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:19.356832   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:19.356886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:19.382609   39074 cri.go:89] found id: ""
	I1002 20:22:19.382624   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.382632   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:19.382641   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:19.382662   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:19.409876   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:19.409890   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:19.476525   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:19.476540   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:19.487600   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:19.487616   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:19.540532   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:19.533762   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.534339   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.535957   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.536415   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.537924   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:19.533762   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.534339   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.535957   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.536415   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.537924   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:19.540541   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:19.540552   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
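
The iteration timestamps (20:22:07.5, 10.4, 13.3, 16.2, 19.2, ...) show the harness re-probing roughly every three seconds. Below is a bash sketch of that cadence as a bounded wait loop; the 120-second deadline is an illustrative assumption, since the log itself only shows the ~3 s interval.

    # Poll for the apiserver process about every 3 s until found or deadline.
    deadline=$((SECONDS + 120))   # assumed budget; not taken from this log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo 'timed out waiting for kube-apiserver' >&2
        exit 1
      fi
      sleep 3
    done
    echo 'kube-apiserver process found'
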
	I1002 20:22:22.106355   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:22.116499   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:22.116552   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:22.142485   39074 cri.go:89] found id: ""
	I1002 20:22:22.142499   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.142507   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:22.142514   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:22.142561   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:22.168287   39074 cri.go:89] found id: ""
	I1002 20:22:22.168301   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.168308   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:22.168312   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:22.168352   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:22.192639   39074 cri.go:89] found id: ""
	I1002 20:22:22.192666   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.192674   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:22.192680   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:22.192726   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:22.217360   39074 cri.go:89] found id: ""
	I1002 20:22:22.217375   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.217383   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:22.217390   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:22.217436   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:22.241729   39074 cri.go:89] found id: ""
	I1002 20:22:22.241744   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.241753   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:22.241759   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:22.241809   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:22.266793   39074 cri.go:89] found id: ""
	I1002 20:22:22.266810   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.266817   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:22.266822   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:22.266866   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:22.289775   39074 cri.go:89] found id: ""
	I1002 20:22:22.289789   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.289794   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:22.289801   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:22.289809   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:22.344340   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:22.337274   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.337797   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.339350   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.339784   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.341397   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:22.337274   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.337797   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.339350   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.339784   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.341397   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:22.344350   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:22.344362   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:22.404393   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:22.404410   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:22.432171   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:22.432186   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:22.498216   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:22.498233   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:25.010156   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:25.020516   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:25.020560   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:25.045455   39074 cri.go:89] found id: ""
	I1002 20:22:25.045470   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.045480   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:25.045486   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:25.045529   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:25.070018   39074 cri.go:89] found id: ""
	I1002 20:22:25.070031   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.070037   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:25.070041   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:25.070080   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:25.093191   39074 cri.go:89] found id: ""
	I1002 20:22:25.093204   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.093210   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:25.093214   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:25.093257   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:25.117770   39074 cri.go:89] found id: ""
	I1002 20:22:25.117782   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.117788   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:25.117793   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:25.117834   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:25.141300   39074 cri.go:89] found id: ""
	I1002 20:22:25.141315   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.141325   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:25.141331   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:25.141383   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:25.165980   39074 cri.go:89] found id: ""
	I1002 20:22:25.165993   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.165999   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:25.166003   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:25.166041   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:25.191730   39074 cri.go:89] found id: ""
	I1002 20:22:25.191742   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.191749   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:25.191757   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:25.191766   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:25.259005   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:25.259025   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:25.270639   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:25.270673   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:25.324592   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:25.317379   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.317967   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.319572   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.320064   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.321617   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:25.317379   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.317967   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.319572   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.320064   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.321617   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:25.324602   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:25.324614   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:25.385501   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:25.385519   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
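
The "container status" step above uses a two-level fallback, shown annotated below; this is the literal command from the log, with comments explaining each branch.

    # The fallback chain used by the "container status" step, annotated:
    sudo `which crictl || echo crictl` ps -a \
      || sudo docker ps -a
    # `which crictl` resolves the full path when crictl is installed; the
    # `|| echo crictl` keeps the command word non-empty so the failure mode is
    # a clean "command not found" rather than an empty command, and the outer
    # `|| sudo docker ps -a` retries with Docker when the CRI listing fails.
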
	I1002 20:22:27.914463   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:27.925227   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:27.925271   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:27.948666   39074 cri.go:89] found id: ""
	I1002 20:22:27.948681   39074 logs.go:282] 0 containers: []
	W1002 20:22:27.948690   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:27.948695   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:27.948735   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:27.972698   39074 cri.go:89] found id: ""
	I1002 20:22:27.972711   39074 logs.go:282] 0 containers: []
	W1002 20:22:27.972716   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:27.972720   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:27.972765   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:27.996954   39074 cri.go:89] found id: ""
	I1002 20:22:27.996970   39074 logs.go:282] 0 containers: []
	W1002 20:22:27.996979   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:27.996984   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:27.997029   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:28.022092   39074 cri.go:89] found id: ""
	I1002 20:22:28.022109   39074 logs.go:282] 0 containers: []
	W1002 20:22:28.022117   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:28.022123   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:28.022164   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:28.047808   39074 cri.go:89] found id: ""
	I1002 20:22:28.047824   39074 logs.go:282] 0 containers: []
	W1002 20:22:28.047831   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:28.047836   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:28.047876   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:28.071793   39074 cri.go:89] found id: ""
	I1002 20:22:28.071807   39074 logs.go:282] 0 containers: []
	W1002 20:22:28.071816   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:28.071822   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:28.071868   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:28.096447   39074 cri.go:89] found id: ""
	I1002 20:22:28.096462   39074 logs.go:282] 0 containers: []
	W1002 20:22:28.096471   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:28.096479   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:28.096489   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:28.107018   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:28.107032   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:28.159925   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:28.153221   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.153766   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.155300   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.155764   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.157273   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:28.153221   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.153766   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.155300   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.155764   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.157273   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:28.159935   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:28.159945   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:28.219759   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:28.219776   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:28.247325   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:28.247345   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:30.813772   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:30.824079   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:30.824122   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:30.847714   39074 cri.go:89] found id: ""
	I1002 20:22:30.847727   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.847734   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:30.847739   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:30.847783   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:30.870579   39074 cri.go:89] found id: ""
	I1002 20:22:30.870612   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.870619   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:30.870623   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:30.870686   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:30.894513   39074 cri.go:89] found id: ""
	I1002 20:22:30.894528   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.894537   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:30.894542   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:30.894591   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:30.919171   39074 cri.go:89] found id: ""
	I1002 20:22:30.919186   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.919191   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:30.919196   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:30.919236   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:30.943990   39074 cri.go:89] found id: ""
	I1002 20:22:30.944003   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.944009   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:30.944013   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:30.944054   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:30.968147   39074 cri.go:89] found id: ""
	I1002 20:22:30.968162   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.968170   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:30.968178   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:30.968227   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:30.991705   39074 cri.go:89] found id: ""
	I1002 20:22:30.991717   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.991722   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:30.991729   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:30.991740   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:31.046303   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:31.038433   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.039034   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.040929   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.041723   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.043230   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:31.038433   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.039034   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.040929   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.041723   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.043230   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
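Every kubectl call in this section fails the same way: the kubeconfig targets localhost:8441 and nothing is listening on that port, so each request dies with "connection refused" before ever reaching an API server. A quick way to confirm this from inside the node (a suggested manual check, assuming ss is available in the minikube image, which it normally is):

    sudo ss -ltn 'sport = :8441'

An empty listener table for port 8441 matches the crictl output above: there is no kube-apiserver container to bind it.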
	I1002 20:22:31.046314   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:31.046325   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:31.105380   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:31.105397   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:31.132347   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:31.132363   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:31.202102   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:31.202119   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:33.715172   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:33.725339   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:33.725386   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:33.750520   39074 cri.go:89] found id: ""
	I1002 20:22:33.750534   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.750543   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:33.750549   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:33.750595   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:33.773913   39074 cri.go:89] found id: ""
	I1002 20:22:33.773928   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.773937   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:33.773943   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:33.773991   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:33.797530   39074 cri.go:89] found id: ""
	I1002 20:22:33.797545   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.797554   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:33.797560   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:33.797630   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:33.821852   39074 cri.go:89] found id: ""
	I1002 20:22:33.821871   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.821879   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:33.821885   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:33.821934   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:33.846332   39074 cri.go:89] found id: ""
	I1002 20:22:33.846348   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.846356   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:33.846362   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:33.846400   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:33.870615   39074 cri.go:89] found id: ""
	I1002 20:22:33.870629   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.870639   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:33.870657   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:33.870706   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:33.895226   39074 cri.go:89] found id: ""
	I1002 20:22:33.895241   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.895250   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:33.895266   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:33.895276   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:33.955530   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:33.955547   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:33.983183   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:33.983198   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:34.049224   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:34.049251   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:34.060667   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:34.060686   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:34.114666   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:34.107838   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.108343   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.109897   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.110325   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.111840   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:34.107838   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.108343   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.109897   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.110325   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.111840   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:36.616388   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:36.626616   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:36.626688   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:36.652926   39074 cri.go:89] found id: ""
	I1002 20:22:36.652947   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.652957   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:36.652965   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:36.653011   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:36.676048   39074 cri.go:89] found id: ""
	I1002 20:22:36.676060   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.676066   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:36.676071   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:36.676115   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:36.700475   39074 cri.go:89] found id: ""
	I1002 20:22:36.700489   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.700499   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:36.700505   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:36.700546   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:36.724541   39074 cri.go:89] found id: ""
	I1002 20:22:36.724559   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.724567   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:36.724576   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:36.724623   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:36.748967   39074 cri.go:89] found id: ""
	I1002 20:22:36.748982   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.748991   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:36.748997   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:36.749043   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:36.773168   39074 cri.go:89] found id: ""
	I1002 20:22:36.773183   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.773191   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:36.773197   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:36.773249   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:36.796981   39074 cri.go:89] found id: ""
	I1002 20:22:36.796997   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.797003   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:36.797011   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:36.797023   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:36.867000   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:36.867018   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:36.878017   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:36.878031   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:36.931114   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:36.924389   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.924871   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.926348   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.926766   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.928319   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:36.924389   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.924871   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.926348   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.926766   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.928319   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:36.931129   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:36.931137   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:36.993849   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:36.993868   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
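The block above is one iteration of a polling loop that repeats every 2-3 seconds while minikube waits for the API server: pgrep for a kube-apiserver process, one crictl query per control-plane component, then a full log sweep (kubelet, dmesg, describe nodes, CRI-O, container status). The per-component check reduces to a single command, shown here as a hand-run sketch (the component name is interchangeable):

    sudo crictl ps -a --quiet --name=kube-apiserver

Throughout this run the query prints nothing (every iteration logs found id: "" and 0 containers), i.e. CRI-O never created any control-plane container, which is why the loop never terminates successfully.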
	I1002 20:22:39.524626   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:39.535502   39074 kubeadm.go:601] duration metric: took 4m1.714069333s to restartPrimaryControlPlane
	W1002 20:22:39.535572   39074 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1002 20:22:39.535638   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:22:39.981011   39074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:22:39.993244   39074 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:22:40.001158   39074 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:22:40.001211   39074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:22:40.008736   39074 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:22:40.008749   39074 kubeadm.go:157] found existing configuration files:
	
	I1002 20:22:40.008782   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:22:40.015964   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:22:40.016000   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:22:40.022839   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:22:40.030026   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:22:40.030064   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:22:40.036752   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:22:40.043720   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:22:40.043755   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:22:40.050532   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:22:40.057416   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:22:40.057453   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
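The stale-config cleanup above applies one fixed rule per kubeconfig: if the file does not contain the expected endpoint https://control-plane.minikube.internal:8441, delete it. Since kubeadm reset already removed all four files, every grep exits with status 2 and every rm is a no-op. A compact shell equivalent of the sequence in the log, as an illustrative sketch rather than minikube's actual implementation:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done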
	I1002 20:22:40.063936   39074 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:22:40.116427   39074 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:22:40.171173   39074 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:26:42.624936   39074 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:26:42.625021   39074 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:26:42.627908   39074 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:26:42.627954   39074 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:26:42.628043   39074 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:26:42.628106   39074 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:26:42.628137   39074 kubeadm.go:318] OS: Linux
	I1002 20:26:42.628173   39074 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:26:42.628211   39074 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:26:42.628278   39074 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:26:42.628331   39074 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:26:42.628370   39074 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:26:42.628412   39074 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:26:42.628451   39074 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:26:42.628487   39074 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:26:42.628556   39074 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:26:42.628674   39074 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:26:42.628787   39074 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:26:42.628860   39074 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:26:42.630666   39074 out.go:252]   - Generating certificates and keys ...
	I1002 20:26:42.630736   39074 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:26:42.630813   39074 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:26:42.630900   39074 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:26:42.630973   39074 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:26:42.631035   39074 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:26:42.631078   39074 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:26:42.631142   39074 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:26:42.631194   39074 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:26:42.631256   39074 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:26:42.631324   39074 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:26:42.631354   39074 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:26:42.631399   39074 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:26:42.631441   39074 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:26:42.631487   39074 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:26:42.631529   39074 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:26:42.631595   39074 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:26:42.631671   39074 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:26:42.631741   39074 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:26:42.631812   39074 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:26:42.633616   39074 out.go:252]   - Booting up control plane ...
	I1002 20:26:42.633716   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:26:42.633796   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:26:42.633850   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:26:42.633948   39074 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:26:42.634026   39074 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:26:42.634114   39074 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:26:42.634190   39074 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:26:42.634222   39074 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:26:42.634348   39074 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:26:42.634448   39074 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:26:42.634515   39074 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000852315s
	I1002 20:26:42.634627   39074 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:26:42.634725   39074 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 20:26:42.634809   39074 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:26:42.634907   39074 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:26:42.635026   39074 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000487211s
	I1002 20:26:42.635115   39074 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000480662s
	I1002 20:26:42.635180   39074 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000745923s
	I1002 20:26:42.635185   39074 kubeadm.go:318] 
	I1002 20:26:42.635259   39074 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:26:42.635324   39074 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:26:42.635395   39074 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:26:42.635478   39074 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:26:42.635541   39074 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:26:42.635608   39074 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:26:42.635644   39074 kubeadm.go:318] 
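The wait-control-plane phase that just timed out probes three fixed endpoints for up to 4m0s each. The same probes can be issued manually from the node with curl (addresses copied from the log; -k skips certificate verification, needed because the controller-manager and scheduler serve self-signed certificates):

    curl -ks https://192.168.49.2:8441/livez      # kube-apiserver
    curl -ks https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -ks https://127.0.0.1:10259/livez        # kube-scheduler

"Connection refused" on all three, as reported here, points to the static pods never coming up at all, rather than starting and then failing their health checks.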
	W1002 20:26:42.635735   39074 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000852315s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000487211s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000480662s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000745923s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 20:26:42.635812   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:26:43.072992   39074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:26:43.084946   39074 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:26:43.084987   39074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:26:43.092545   39074 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:26:43.092552   39074 kubeadm.go:157] found existing configuration files:
	
	I1002 20:26:43.092583   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:26:43.099679   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:26:43.099725   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:26:43.106411   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:26:43.113271   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:26:43.113302   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:26:43.120089   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:26:43.126923   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:26:43.126953   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:26:43.133686   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:26:43.140427   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:26:43.140454   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:26:43.147131   39074 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:26:43.180956   39074 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:26:43.181017   39074 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:26:43.199951   39074 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:26:43.200009   39074 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:26:43.200037   39074 kubeadm.go:318] OS: Linux
	I1002 20:26:43.200076   39074 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:26:43.200114   39074 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:26:43.200153   39074 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:26:43.200196   39074 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:26:43.200234   39074 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:26:43.200272   39074 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:26:43.200315   39074 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:26:43.200350   39074 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:26:43.254197   39074 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:26:43.254330   39074 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:26:43.254435   39074 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:26:43.260331   39074 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:26:43.264543   39074 out.go:252]   - Generating certificates and keys ...
	I1002 20:26:43.264610   39074 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:26:43.264706   39074 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:26:43.264789   39074 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:26:43.264843   39074 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:26:43.264905   39074 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:26:43.264949   39074 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:26:43.265012   39074 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:26:43.265062   39074 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:26:43.265129   39074 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:26:43.265188   39074 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:26:43.265219   39074 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:26:43.265265   39074 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:26:43.505091   39074 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:26:43.932140   39074 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:26:44.064643   39074 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:26:44.173218   39074 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:26:44.534380   39074 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:26:44.534804   39074 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:26:44.538135   39074 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:26:44.539757   39074 out.go:252]   - Booting up control plane ...
	I1002 20:26:44.539881   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:26:44.539950   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:26:44.540002   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:26:44.553179   39074 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:26:44.553329   39074 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:26:44.559491   39074 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:26:44.559770   39074 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:26:44.559808   39074 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:26:44.659881   39074 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:26:44.660026   39074 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:26:45.660495   39074 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000782032s
	I1002 20:26:45.664397   39074 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:26:45.664522   39074 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 20:26:45.664595   39074 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:26:45.664676   39074 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:30:45.665391   39074 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000736768s
	I1002 20:30:45.665506   39074 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000901114s
	I1002 20:30:45.665618   39074 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001071178s
	I1002 20:30:45.665634   39074 kubeadm.go:318] 
	I1002 20:30:45.665788   39074 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:30:45.665904   39074 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:30:45.665995   39074 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:30:45.666081   39074 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:30:45.666142   39074 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:30:45.666213   39074 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:30:45.666216   39074 kubeadm.go:318] 
	I1002 20:30:45.669103   39074 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:30:45.669219   39074 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:30:45.669740   39074 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:30:45.669792   39074 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:30:45.669843   39074 kubeadm.go:402] duration metric: took 12m7.882478982s to StartCluster
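After 12m7s, both kubeadm init attempts have failed identically: the kubelet becomes healthy within about a second, but no control-plane container is ever created, so every component check times out at 4m0s. With the kubelet fine and crictl consistently empty, the next place to look would be CRI-O itself, for example (a suggested follow-up, not part of the test's own diagnostics):

    # were any pod sandboxes created at all?
    sudo crictl pods
    # recent CRI-O errors only
    sudo journalctl -u crio -n 400 -p err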
	I1002 20:30:45.669874   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:30:45.669917   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:30:45.695577   39074 cri.go:89] found id: ""
	I1002 20:30:45.695596   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.695603   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:30:45.695610   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:30:45.695674   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:30:45.719440   39074 cri.go:89] found id: ""
	I1002 20:30:45.719456   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.719464   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:30:45.719469   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:30:45.719511   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:30:45.743166   39074 cri.go:89] found id: ""
	I1002 20:30:45.743181   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.743190   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:30:45.743195   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:30:45.743238   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:30:45.767934   39074 cri.go:89] found id: ""
	I1002 20:30:45.767959   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.767967   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:30:45.767974   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:30:45.768019   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:30:45.792091   39074 cri.go:89] found id: ""
	I1002 20:30:45.792102   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.792108   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:30:45.792112   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:30:45.792150   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:30:45.815448   39074 cri.go:89] found id: ""
	I1002 20:30:45.815463   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.815469   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:30:45.815475   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:30:45.815518   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:30:45.840287   39074 cri.go:89] found id: ""
	I1002 20:30:45.840299   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.840305   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:30:45.840312   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:30:45.840321   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:30:45.868158   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:30:45.868172   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:30:45.936734   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:30:45.936752   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:30:45.948158   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:30:45.948175   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:30:46.002360   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:30:45.995517   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.996138   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.997668   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.998069   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.999573   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:30:45.995517   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.996138   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.997668   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.998069   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.999573   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:30:46.002381   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:30:46.002392   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1002 20:30:46.065214   39074 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000782032s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000736768s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000901114s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001071178s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:30:46.065257   39074 out.go:285] * 
	W1002 20:30:46.065383   39074 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1002 20:30:46.065406   39074 out.go:285] * 
	W1002 20:30:46.067075   39074 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:30:46.070473   39074 out.go:203] 
	W1002 20:30:46.071639   39074 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1002 20:30:46.071666   39074 out.go:285] * 
	I1002 20:30:46.072909   39074 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.578716314Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8edcacbd-1d81-4dc6-8f4f-0fd70bfdb6c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.579524457Z" level=info msg="createCtr: deleting container ID 011458c3484a34a4761c138ce28bea0b5d171a4a446a98a8b6ccbe16d0a221cc from idIndex" id=2ef148b8-b8dc-44a5-ac5b-5c4d911f40c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.579554899Z" level=info msg="createCtr: removing container 011458c3484a34a4761c138ce28bea0b5d171a4a446a98a8b6ccbe16d0a221cc" id=2ef148b8-b8dc-44a5-ac5b-5c4d911f40c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.579581652Z" level=info msg="createCtr: deleting container 011458c3484a34a4761c138ce28bea0b5d171a4a446a98a8b6ccbe16d0a221cc from storage" id=2ef148b8-b8dc-44a5-ac5b-5c4d911f40c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.57987229Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=4477bd8b-de09-41de-aae8-add16bf7fb2d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.580027245Z" level=info msg="createCtr: deleting container ID 6d4c64b92b255a273f9b5f60b5c744e62abda7ace9eb8d6b1381ab5d42947186 from idIndex" id=8edcacbd-1d81-4dc6-8f4f-0fd70bfdb6c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.580054529Z" level=info msg="createCtr: removing container 6d4c64b92b255a273f9b5f60b5c744e62abda7ace9eb8d6b1381ab5d42947186" id=8edcacbd-1d81-4dc6-8f4f-0fd70bfdb6c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.580088794Z" level=info msg="createCtr: deleting container 6d4c64b92b255a273f9b5f60b5c744e62abda7ace9eb8d6b1381ab5d42947186 from storage" id=8edcacbd-1d81-4dc6-8f4f-0fd70bfdb6c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.581202962Z" level=info msg="createCtr: deleting container ID 9f3e30af1a945c60f4428061f6cbb4af46ff7b7aa3f4cc4da6d6c8ff909669ac from idIndex" id=4477bd8b-de09-41de-aae8-add16bf7fb2d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.581236233Z" level=info msg="createCtr: removing container 9f3e30af1a945c60f4428061f6cbb4af46ff7b7aa3f4cc4da6d6c8ff909669ac" id=4477bd8b-de09-41de-aae8-add16bf7fb2d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.581268315Z" level=info msg="createCtr: deleting container 9f3e30af1a945c60f4428061f6cbb4af46ff7b7aa3f4cc4da6d6c8ff909669ac from storage" id=4477bd8b-de09-41de-aae8-add16bf7fb2d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.582774964Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-753218_kube-system_b932b0024653c86a7ea85a2a83a943a4_0" id=2ef148b8-b8dc-44a5-ac5b-5c4d911f40c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.584231912Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-753218_kube-system_91f2b96cb3e3f380ded17f30e8d873bd_0" id=8edcacbd-1d81-4dc6-8f4f-0fd70bfdb6c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.584517216Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-753218_kube-system_b25a71e49a335bbe853872de1b1e3093_0" id=4477bd8b-de09-41de-aae8-add16bf7fb2d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.546180204Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=9f48774c-99d7-4c53-9acd-56238a58b621 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.547057638Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c4827f6f-404c-4181-82ca-154f80cbb907 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.547869288Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-753218/kube-apiserver" id=c5dff043-11a1-4b1d-b981-42c744daa8e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.548067236Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.551126227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.551499586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.565235643Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c5dff043-11a1-4b1d-b981-42c744daa8e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.566508741Z" level=info msg="createCtr: deleting container ID 21140f6a02459c6df36ecf818ef775e79b3462280bfd0d1f16f28e3316920953 from idIndex" id=c5dff043-11a1-4b1d-b981-42c744daa8e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.566538929Z" level=info msg="createCtr: removing container 21140f6a02459c6df36ecf818ef775e79b3462280bfd0d1f16f28e3316920953" id=c5dff043-11a1-4b1d-b981-42c744daa8e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.566565962Z" level=info msg="createCtr: deleting container 21140f6a02459c6df36ecf818ef775e79b3462280bfd0d1f16f28e3316920953 from storage" id=c5dff043-11a1-4b1d-b981-42c744daa8e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.568315977Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-753218_kube-system_802f0aebed1bb3dd62306b1d2076fd94_0" id=c5dff043-11a1-4b1d-b981-42c744daa8e8 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:30:47.129882   15690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:47.130377   15690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:47.131884   15690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:47.132319   15690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:47.133800   15690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:30:47 up  1:13,  0 user,  load average: 0.00, 0.04, 0.07
	Linux functional-753218 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:30:39 functional-753218 kubelet[14925]:  > podSandboxID="938004d98ea751eb2eeff411184915e21872d6d9720257a5999ef0864a9cbb1c"
	Oct 02 20:30:39 functional-753218 kubelet[14925]: E1002 20:30:39.584538   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:30:39 functional-753218 kubelet[14925]:         container etcd start failed in pod etcd-functional-753218_kube-system(91f2b96cb3e3f380ded17f30e8d873bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:39 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:30:39 functional-753218 kubelet[14925]: E1002 20:30:39.584575   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753218" podUID="91f2b96cb3e3f380ded17f30e8d873bd"
	Oct 02 20:30:39 functional-753218 kubelet[14925]: E1002 20:30:39.584728   14925 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:30:39 functional-753218 kubelet[14925]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:39 functional-753218 kubelet[14925]:  > podSandboxID="6ae6de7d398fa442f7f140a6767c4de14fdad57319542a7b5e3df53c8ac49d18"
	Oct 02 20:30:39 functional-753218 kubelet[14925]: E1002 20:30:39.584795   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:30:39 functional-753218 kubelet[14925]:         container kube-scheduler start failed in pod kube-scheduler-functional-753218_kube-system(b25a71e49a335bbe853872de1b1e3093): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:39 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:30:39 functional-753218 kubelet[14925]: E1002 20:30:39.585963   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753218" podUID="b25a71e49a335bbe853872de1b1e3093"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.168537   14925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753218?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: I1002 20:30:42.321168   14925 kubelet_node_status.go:75] "Attempting to register node" node="functional-753218"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.321508   14925 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753218"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.545784   14925 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.568537   14925 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:30:42 functional-753218 kubelet[14925]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:42 functional-753218 kubelet[14925]:  > podSandboxID="7a2fde0baea214f3eb0043d508edd186efa5f3f087d902573e164eb4765f9b5b"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.568614   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:30:42 functional-753218 kubelet[14925]:         container kube-apiserver start failed in pod kube-apiserver-functional-753218_kube-system(802f0aebed1bb3dd62306b1d2076fd94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:42 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.568640   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753218" podUID="802f0aebed1bb3dd62306b1d2076fd94"
	Oct 02 20:30:45 functional-753218 kubelet[14925]: E1002 20:30:45.563684   14925 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-753218\" not found"
	Oct 02 20:30:46 functional-753218 kubelet[14925]: E1002 20:30:46.169281   14925 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753218.186ac673e6f5d5d2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753218,UID:functional-753218,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753218 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753218,},FirstTimestamp:2025-10-02 20:26:45.540009426 +0000 UTC m=+0.879461117,LastTimestamp:2025-10-02 20:26:45.540009426 +0000 UTC m=+0.879461117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753218,}"
	

                                                
                                                
-- /stdout --
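The kubeadm output captured above suggests triaging this with crictl. A minimal sketch of that triage for this profile, assuming the node container is still running and reachable via minikube ssh (the profile name is taken from the logs above):

    # List every kube-* container CRI-O knows about, including exited ones,
    # using the exact runtime endpoint kubeadm printed.
    minikube ssh -p functional-753218 -- \
      sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # Dump the logs of a failing container; CONTAINERID is a placeholder
    # for an ID taken from the listing above.
    minikube ssh -p functional-753218 -- \
      sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

In this particular run the listing would likely come back empty: the "==> container status <==" section above shows no containers at all, meaning creation itself failed and there are no container logs to fetch, which is diagnostic in its own right.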
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218: exit status 2 (294.167872ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753218" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (733.62s)
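Every CreateContainer call in the CRI-O and kubelet sections above fails with "cannot open sd-bus: No such file or directory", which usually points at a runtime stack configured for the systemd cgroup manager on a node where the system D-Bus socket is unavailable. A speculative check along those lines; the config key and socket path are CRI-O's and D-Bus's standard ones, but treating this as the root cause of this run is an assumption:

    # Which cgroup manager is CRI-O configured with? (cgroup_manager lives
    # under [crio.runtime] in CRI-O's configuration files.)
    minikube ssh -p functional-753218 -- \
      sudo grep -Rn "cgroup_manager" /etc/crio/

    # Does the system bus socket the runtime would talk to actually exist?
    minikube ssh -p functional-753218 -- \
      ls -l /run/dbus/system_bus_socket

If the first command reports systemd while the second shows no socket, switching CRI-O to cgroup_manager = "cgroupfs" (or getting dbus running inside the node) would be the obvious next experiment.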

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (1.75s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-753218 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-753218 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (46.646343ms)

                                                
                                                
** stderr ** 
	E1002 20:30:47.822179   52296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:30:47.823005   52296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:30:47.823634   52296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:30:47.825018   52296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:30:47.825245   52296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-753218 get po -l tier=control-plane -n kube-system -o=json": exit status 1
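All five kubectl probes above fail at the TCP layer ("connection refused" on 192.168.49.2:8441), consistent with the apiserver container never having been created. A quick way to separate "apiserver unhealthy" from "nothing listening" is to hit the same /livez endpoint kubeadm was polling; a sketch (-k skips TLS verification, which is acceptable for a health probe):

    # Connection refused => nothing bound to the port; any HTTP response
    # (even 401/403) => the apiserver process is at least accepting traffic.
    curl -k --max-time 5 https://192.168.49.2:8441/livez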
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753218
helpers_test.go:243: (dbg) docker inspect functional-753218:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	        "Created": "2025-10-02T20:04:01.746609873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27425,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:04:01.779241985Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hosts",
	        "LogPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2-json.log",
	        "Name": "/functional-753218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	                "LowerDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753218",
	                "Source": "/var/lib/docker/volumes/functional-753218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753218",
	                "name.minikube.sigs.k8s.io": "functional-753218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "004081b2f3377fd186346a9b01eb1a4bc9a3def8255492ba6d60a3d97d3ae02f",
	            "SandboxKey": "/var/run/docker/netns/004081b2f337",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:da:57:41:87:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5a9808f557430f8bb8f634f8b99193d9e4883c7bab84d388ec04ce3ac0291ef6",
	                    "EndpointID": "2b07b33f476d99b70eed072dbaf735ba0ad3a85f48f17fd63921a2bfbfd2db45",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753218",
	                        "13014a14c3fc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
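The inspect output shows each container port published on an ephemeral 127.0.0.1 host port (8441/tcp landed on 32781 in this run). When scripting against such a profile, the mapping can be pulled directly with a Go template instead of parsing the full JSON; a small sketch:

    # Print the host port Docker mapped to the apiserver's 8441/tcp.
    docker inspect \
      -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' \
      functional-753218
    # -> 32781 for the state recorded above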
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218: exit status 2 (275.628817ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 logs -n 25
helpers_test.go:260: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                            │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                            │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ unpause │ nospam-547008 --log_dir /tmp/nospam-547008 unpause                                                            │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                               │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                               │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ stop    │ nospam-547008 --log_dir /tmp/nospam-547008 stop                                                               │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ delete  │ -p nospam-547008                                                                                              │ nospam-547008     │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │ 02 Oct 25 20:03 UTC │
	│ start   │ -p functional-753218 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:03 UTC │                     │
	│ start   │ -p functional-753218 --alsologtostderr -v=8                                                                   │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:12 UTC │                     │
	│ cache   │ functional-753218 cache add registry.k8s.io/pause:3.1                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ functional-753218 cache add registry.k8s.io/pause:3.3                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ functional-753218 cache add registry.k8s.io/pause:latest                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ functional-753218 cache add minikube-local-cache-test:functional-753218                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ functional-753218 cache delete minikube-local-cache-test:functional-753218                                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ ssh     │ functional-753218 ssh sudo crictl images                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ ssh     │ functional-753218 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ ssh     │ functional-753218 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ cache   │ functional-753218 cache reload                                                                                │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ ssh     │ functional-753218 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ kubectl │ functional-753218 kubectl -- --context functional-753218 get pods                                             │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ start   │ -p functional-753218 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:18:34
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:18:34.206207   39074 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:18:34.206493   39074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:34.206497   39074 out.go:374] Setting ErrFile to fd 2...
	I1002 20:18:34.206500   39074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:34.206690   39074 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:18:34.207119   39074 out.go:368] Setting JSON to false
	I1002 20:18:34.208025   39074 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3663,"bootTime":1759432651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:18:34.208099   39074 start.go:140] virtualization: kvm guest
	I1002 20:18:34.211076   39074 out.go:179] * [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:18:34.212342   39074 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:18:34.212345   39074 notify.go:221] Checking for updates...
	I1002 20:18:34.213685   39074 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:18:34.214912   39074 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:18:34.216075   39074 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:18:34.217175   39074 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:18:34.218365   39074 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:18:34.219862   39074 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:18:34.219970   39074 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:18:34.243293   39074 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:18:34.243370   39074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:34.294846   39074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 20:18:34.285071909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:18:34.294933   39074 docker.go:319] overlay module found
	I1002 20:18:34.296853   39074 out.go:179] * Using the docker driver based on existing profile
	I1002 20:18:34.297994   39074 start.go:306] selected driver: docker
	I1002 20:18:34.298010   39074 start.go:936] validating driver "docker" against &{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:34.298070   39074 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:18:34.298154   39074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:34.347576   39074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 20:18:34.338434102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:18:34.348199   39074 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:18:34.348218   39074 cni.go:84] Creating CNI manager for ""
	I1002 20:18:34.348268   39074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:18:34.348308   39074 start.go:350] cluster config:
	{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:34.350240   39074 out.go:179] * Starting "functional-753218" primary control-plane node in "functional-753218" cluster
	I1002 20:18:34.351573   39074 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:18:34.353042   39074 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:18:34.354380   39074 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:18:34.354407   39074 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:18:34.354414   39074 cache.go:59] Caching tarball of preloaded images
	I1002 20:18:34.354480   39074 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:18:34.354514   39074 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:18:34.354521   39074 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:18:34.354600   39074 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/config.json ...
	I1002 20:18:34.373723   39074 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:18:34.373737   39074 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:18:34.373750   39074 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:18:34.373779   39074 start.go:361] acquireMachinesLock for functional-753218: {Name:mk742badf6f1dbafca8397e398143e758831ae3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:18:34.373825   39074 start.go:365] duration metric: took 33.687µs to acquireMachinesLock for "functional-753218"
	I1002 20:18:34.373838   39074 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:18:34.373845   39074 fix.go:55] fixHost starting: 
	I1002 20:18:34.374037   39074 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:18:34.391194   39074 fix.go:113] recreateIfNeeded on functional-753218: state=Running err=<nil>
	W1002 20:18:34.391212   39074 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:18:34.393102   39074 out.go:252] * Updating the running docker "functional-753218" container ...
	I1002 20:18:34.393135   39074 machine.go:93] provisionDockerMachine start ...
	I1002 20:18:34.393196   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:34.410850   39074 main.go:141] libmachine: Using SSH client type: native
	I1002 20:18:34.411066   39074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:18:34.411072   39074 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:18:34.552329   39074 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:18:34.552359   39074 ubuntu.go:182] provisioning hostname "functional-753218"
	I1002 20:18:34.552416   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:34.570052   39074 main.go:141] libmachine: Using SSH client type: native
	I1002 20:18:34.570307   39074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:18:34.570319   39074 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753218 && echo "functional-753218" | sudo tee /etc/hostname
	I1002 20:18:34.721441   39074 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:18:34.721512   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:34.738897   39074 main.go:141] libmachine: Using SSH client type: native
	I1002 20:18:34.739113   39074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:18:34.739125   39074 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753218/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:18:34.881059   39074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:18:34.881084   39074 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:18:34.881113   39074 ubuntu.go:190] setting up certificates
	I1002 20:18:34.881121   39074 provision.go:84] configureAuth start
	I1002 20:18:34.881164   39074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:18:34.899501   39074 provision.go:143] copyHostCerts
	I1002 20:18:34.899560   39074 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:18:34.899574   39074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:18:34.899678   39074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:18:34.899811   39074 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:18:34.899820   39074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:18:34.899861   39074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:18:34.899952   39074 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:18:34.899957   39074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:18:34.899992   39074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:18:34.900070   39074 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.functional-753218 san=[127.0.0.1 192.168.49.2 functional-753218 localhost minikube]
	I1002 20:18:35.209717   39074 provision.go:177] copyRemoteCerts
	I1002 20:18:35.209761   39074 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:18:35.209800   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.226488   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:35.326447   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:18:35.342793   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:18:35.359162   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:18:35.375197   39074 provision.go:87] duration metric: took 494.066038ms to configureAuth
	I1002 20:18:35.375214   39074 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:18:35.375353   39074 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:18:35.375460   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.392271   39074 main.go:141] libmachine: Using SSH client type: native
	I1002 20:18:35.392535   39074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:18:35.392555   39074 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:18:35.662001   39074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:18:35.662017   39074 machine.go:96] duration metric: took 1.268875772s to provisionDockerMachine
	I1002 20:18:35.662029   39074 start.go:294] postStartSetup for "functional-753218" (driver="docker")
	I1002 20:18:35.662042   39074 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:18:35.662106   39074 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:18:35.662147   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.679558   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:35.779752   39074 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:18:35.783115   39074 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:18:35.783131   39074 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:18:35.783153   39074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:18:35.783280   39074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:18:35.783385   39074 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:18:35.783488   39074 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts -> hosts in /etc/test/nested/copy/12851
	I1002 20:18:35.783529   39074 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12851
	I1002 20:18:35.791362   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:18:35.807703   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts --> /etc/test/nested/copy/12851/hosts (40 bytes)
	I1002 20:18:35.824578   39074 start.go:297] duration metric: took 162.536937ms for postStartSetup
	I1002 20:18:35.824707   39074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:18:35.824741   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.842117   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:35.939428   39074 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:18:35.943787   39074 fix.go:57] duration metric: took 1.569934708s for fixHost
	I1002 20:18:35.943804   39074 start.go:84] releasing machines lock for "functional-753218", held for 1.569972452s
	I1002 20:18:35.943864   39074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:18:35.960772   39074 ssh_runner.go:195] Run: cat /version.json
	I1002 20:18:35.960815   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.960859   39074 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:18:35.960900   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.978069   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:35.978425   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:36.126122   39074 ssh_runner.go:195] Run: systemctl --version
	I1002 20:18:36.132369   39074 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:18:36.165368   39074 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:18:36.169751   39074 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:18:36.169819   39074 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:18:36.177394   39074 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:18:36.177405   39074 start.go:496] detecting cgroup driver to use...
	I1002 20:18:36.177434   39074 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:18:36.177487   39074 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:18:36.191941   39074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:18:36.203333   39074 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:18:36.203390   39074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:18:36.216968   39074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:18:36.228214   39074 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:18:36.308949   39074 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:18:36.392928   39074 docker.go:234] disabling docker service ...
	I1002 20:18:36.392976   39074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:18:36.406808   39074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:18:36.418402   39074 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:18:36.501067   39074 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:18:36.583824   39074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:18:36.595669   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:18:36.609110   39074 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:18:36.609154   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.617194   39074 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:18:36.617240   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.625324   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.633155   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.641048   39074 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:18:36.648837   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.656786   39074 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.664478   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.672362   39074 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:18:36.678936   39074 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:18:36.685474   39074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:18:36.766185   39074 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:18:36.872474   39074 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:18:36.872521   39074 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:18:36.876161   39074 start.go:564] Will wait 60s for crictl version
	I1002 20:18:36.876199   39074 ssh_runner.go:195] Run: which crictl
	I1002 20:18:36.879320   39074 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:18:36.901521   39074 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:18:36.901576   39074 ssh_runner.go:195] Run: crio --version
	I1002 20:18:36.927454   39074 ssh_runner.go:195] Run: crio --version
	I1002 20:18:36.955669   39074 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:18:36.956820   39074 cli_runner.go:164] Run: docker network inspect functional-753218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:18:36.973453   39074 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:18:36.979247   39074 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 20:18:36.980537   39074 kubeadm.go:883] updating cluster {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:18:36.980633   39074 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:18:36.980707   39074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:18:37.012555   39074 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:18:37.012566   39074 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:18:37.012602   39074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:18:37.037114   39074 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:18:37.037125   39074 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:18:37.037130   39074 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:18:37.037235   39074 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:18:37.037301   39074 ssh_runner.go:195] Run: crio config
	I1002 20:18:37.080633   39074 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 20:18:37.080675   39074 cni.go:84] Creating CNI manager for ""
	I1002 20:18:37.080685   39074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:18:37.080697   39074 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:18:37.080715   39074 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753218 NodeName:functional-753218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:18:37.080819   39074 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753218"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:18:37.080866   39074 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:18:37.088458   39074 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:18:37.088499   39074 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:18:37.095835   39074 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:18:37.107722   39074 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:18:37.119278   39074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1002 20:18:37.130821   39074 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:18:37.134590   39074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:18:37.217285   39074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:18:37.229402   39074 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218 for IP: 192.168.49.2
	I1002 20:18:37.229423   39074 certs.go:195] generating shared ca certs ...
	I1002 20:18:37.229445   39074 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:18:37.229580   39074 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:18:37.229612   39074 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:18:37.229635   39074 certs.go:257] generating profile certs ...
	I1002 20:18:37.229744   39074 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key
	I1002 20:18:37.229781   39074 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key.2c64f804
	I1002 20:18:37.229820   39074 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key
	I1002 20:18:37.229920   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:18:37.229944   39074 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:18:37.229949   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:18:37.229969   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:18:37.229988   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:18:37.230004   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:18:37.230036   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:18:37.230546   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:18:37.247164   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:18:37.262985   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:18:37.279026   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:18:37.294907   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:18:37.311017   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:18:37.326759   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:18:37.342531   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:18:37.358985   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:18:37.375049   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:18:37.390853   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:18:37.406776   39074 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:18:37.418137   39074 ssh_runner.go:195] Run: openssl version
	I1002 20:18:37.423758   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:18:37.431400   39074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:18:37.434759   39074 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:18:37.434796   39074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:18:37.469193   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:18:37.476976   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:18:37.484860   39074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:18:37.488438   39074 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:18:37.488489   39074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:18:37.521688   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:18:37.529613   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:18:37.537558   39074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:18:37.541046   39074 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:18:37.541078   39074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:18:37.574961   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:18:37.582802   39074 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:18:37.586377   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:18:37.620185   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:18:37.653623   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:18:37.686983   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:18:37.720317   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:18:37.753617   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 20:18:37.787371   39074 kubeadm.go:400] StartCluster: {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:37.787431   39074 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:18:37.787474   39074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:18:37.813804   39074 cri.go:89] found id: ""
	I1002 20:18:37.813849   39074 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:18:37.821398   39074 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:18:37.821423   39074 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:18:37.821468   39074 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:18:37.828438   39074 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:18:37.828913   39074 kubeconfig.go:125] found "functional-753218" server: "https://192.168.49.2:8441"
	I1002 20:18:37.830019   39074 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:18:37.837252   39074 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 20:04:06.241851372 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 20:18:37.128983250 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1002 20:18:37.837272   39074 kubeadm.go:1160] stopping kube-system containers ...
	I1002 20:18:37.837284   39074 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 20:18:37.837326   39074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:18:37.863302   39074 cri.go:89] found id: ""
	I1002 20:18:37.863361   39074 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 20:18:37.911147   39074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:18:37.918894   39074 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 20:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  2 20:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  2 20:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  2 20:08 /etc/kubernetes/scheduler.conf
	
	I1002 20:18:37.918950   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:18:37.926065   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:18:37.933031   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:18:37.933065   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:18:37.939972   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:18:37.946875   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:18:37.946911   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:18:37.953620   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:18:37.960544   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:18:37.960573   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:18:37.967317   39074 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:18:37.974311   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:38.013321   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:39.074022   39074 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060677583s)
	I1002 20:18:39.074075   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:39.228791   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:39.281116   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:39.328956   39074 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:18:39.329020   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:39.829304 through 20:19:38.829677   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* (this 500ms poll repeated 119 more times over the next 60s; no kube-apiserver process was found on any attempt)
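
The collapsed run above is a plain poll-until-deadline loop: check for the process twice a second, give up when the window expires. A self-contained sketch of that pattern; the pgrep invocation and 500 ms interval are taken from the log, while the 60 s deadline is an assumption read off the log's timestamps:

    // Sketch of the apiserver wait loop implied by the pgrep lines above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("kube-apiserver process is up")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for kube-apiserver; fall back to gathering diagnostics")
    }
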
	I1002 20:19:39.329725   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:39.329777   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:39.355028   39074 cri.go:89] found id: ""
	I1002 20:19:39.355041   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.355048   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:39.355053   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:39.355092   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:39.380001   39074 cri.go:89] found id: ""
	I1002 20:19:39.380017   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.380026   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:39.380031   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:39.380090   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:39.405251   39074 cri.go:89] found id: ""
	I1002 20:19:39.405267   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.405273   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:39.405277   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:39.405321   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:39.430719   39074 cri.go:89] found id: ""
	I1002 20:19:39.430732   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.430739   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:39.430745   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:39.430794   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:39.454916   39074 cri.go:89] found id: ""
	I1002 20:19:39.454929   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.454936   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:39.454940   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:39.454981   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:39.478922   39074 cri.go:89] found id: ""
	I1002 20:19:39.478934   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.478940   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:39.478944   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:39.478983   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:39.503714   39074 cri.go:89] found id: ""
	I1002 20:19:39.503731   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.503739   39074 logs.go:284] No container was found matching "kindnet"
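
Each "listing CRI containers" / `found id: ""` pair above is the same crictl query with a different --name filter, and an empty ID list is what triggers the "No container was found" warning. A hedged sketch of that probe, using only the crictl invocation shown verbatim in the log:

    // Sketch of the per-component CRI probe: one crictl query per component,
    // where an empty result means "no container found".
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(name string) []string {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out)) // one container ID per line when any exist
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, c := range components {
            if ids := containerIDs(c); len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", c)
            } else {
                fmt.Printf("%s: %v\n", c, ids)
            }
        }
    }
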
	I1002 20:19:39.503749   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:39.503760   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:39.573887   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:39.573907   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
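
The kubelet and dmesg gathers are straight shell pipelines run over SSH: the last 400 journal lines for the kubelet unit, and warning-or-worse kernel messages. A small sketch of the same two gathers, assuming a systemd host (both command strings are copied from the log):

    // Sketch of the kubelet/dmesg log-gathering step above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func gather(label, script string) {
        // CombinedOutput keeps stderr, which is where journalctl reports problems.
        out, _ := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        fmt.Printf("=== %s ===\n%s", label, out)
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }
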
	I1002 20:19:39.585174   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:39.585191   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:39.639301   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:39.632705    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.633125    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.634665    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.635127    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.636638    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (verbatim repeat of the five connection-refused errors and the localhost:8441 message above)
	** /stderr **
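
The connection-refused errors confirm that nothing is listening on the apiserver port (8441 for this profile), so every kubectl call fails before it can even authenticate. A minimal probe that reproduces the same symptom:

    // A TCP dial to the apiserver port. With no apiserver running this fails
    // with "connection refused", exactly what kubectl reports above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8441")
    }
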
	I1002 20:19:39.639313   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:39.639322   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:39.699438   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:39.699455   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
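
The container-status gather above uses a shell fallback: prefer crictl, and only if that is missing or fails, fall back to docker ps. The same fallback expressed as a short Go sketch:

    // Sketch of the crictl-or-docker fallback for the container-status gather.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        for _, argv := range [][]string{
            {"sudo", "crictl", "ps", "-a"},
            {"sudo", "docker", "ps", "-a"},
        } {
            if out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput(); err == nil {
                fmt.Printf("%s", out)
                return
            }
        }
        fmt.Println("neither crictl nor docker produced a container listing")
    }
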
	I1002 20:19:42.228926 through 20:19:57.161730   39074 (the same wait-and-diagnose cycle repeated at roughly 3s intervals: 20:19:42, 20:19:45, 20:19:48, 20:19:51, 20:19:54 and 20:19:57. Each pass re-ran the pgrep check, found no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager or kindnet containers, gathered kubelet, dmesg, CRI-O and container-status logs in varying order, and failed "describe nodes" with the same five connection-refused errors against localhost:8441.)
	I1002 20:19:59.673972   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:59.684279   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:59.684321   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:59.708892   39074 cri.go:89] found id: ""
	I1002 20:19:59.708905   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.708911   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:59.708915   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:59.708958   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:59.733806   39074 cri.go:89] found id: ""
	I1002 20:19:59.733821   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.733828   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:59.733834   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:59.733886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:59.758895   39074 cri.go:89] found id: ""
	I1002 20:19:59.758907   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.758913   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:59.758918   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:59.758970   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:59.782140   39074 cri.go:89] found id: ""
	I1002 20:19:59.782154   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.782161   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:59.782166   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:59.782211   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:59.806783   39074 cri.go:89] found id: ""
	I1002 20:19:59.806797   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.806803   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:59.806808   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:59.806851   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:59.831636   39074 cri.go:89] found id: ""
	I1002 20:19:59.831663   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.831673   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:59.831679   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:59.831725   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:59.855094   39074 cri.go:89] found id: ""
	I1002 20:19:59.855110   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.855119   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:59.855128   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:59.855139   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:59.916579   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:59.916598   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:59.944216   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:59.944230   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:00.010694   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:00.010712   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:00.021993   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:00.022008   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:00.076257   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:00.069139    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.069711    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.071246    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.071701    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.073412    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:00.069139    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.069711    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.071246    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.071701    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.073412    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:02.577956   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:02.588476   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:02.588521   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:02.612197   39074 cri.go:89] found id: ""
	I1002 20:20:02.612213   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.612224   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:02.612231   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:02.612283   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:02.636711   39074 cri.go:89] found id: ""
	I1002 20:20:02.636727   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.636737   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:02.636743   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:02.636797   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:02.660364   39074 cri.go:89] found id: ""
	I1002 20:20:02.660380   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.660389   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:02.660396   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:02.660448   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:02.684665   39074 cri.go:89] found id: ""
	I1002 20:20:02.684682   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.684689   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:02.684694   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:02.684739   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:02.710226   39074 cri.go:89] found id: ""
	I1002 20:20:02.710239   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.710247   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:02.710254   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:02.710308   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:02.735247   39074 cri.go:89] found id: ""
	I1002 20:20:02.735262   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.735271   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:02.735278   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:02.735328   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:02.760072   39074 cri.go:89] found id: ""
	I1002 20:20:02.760085   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.760091   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:02.760098   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:02.760106   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:02.824182   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:02.824200   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:02.835284   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:02.835297   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:02.888320   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:02.881490    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.881999    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.883536    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.883961    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.885446    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:02.881490    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.881999    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.883536    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.883961    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.885446    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:02.888330   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:02.888339   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:02.952125   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:02.952145   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
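
Each cycle above asks the CRI for every control-plane component by name and finds no containers at all. The same probe can be reproduced by hand with the command the log runs; a minimal sketch, assuming crictl is on the PATH inside the node:

	# List container IDs for each expected component; empty output means
	# CRI-O has no container (running or exited) for that name.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  echo "$name: ${ids:-<none>}"
	done

An empty result for every name, as seen here, indicates the kubelet never (re)created the control-plane pods, which is consistent with the apiserver port being closed.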
	I1002 20:20:05.481086   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:05.491660   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:05.491723   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:05.517036   39074 cri.go:89] found id: ""
	I1002 20:20:05.517052   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.517060   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:05.517067   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:05.517114   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:05.542299   39074 cri.go:89] found id: ""
	I1002 20:20:05.542312   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.542320   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:05.542326   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:05.542387   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:05.567213   39074 cri.go:89] found id: ""
	I1002 20:20:05.567227   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.567233   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:05.567238   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:05.567286   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:05.590782   39074 cri.go:89] found id: ""
	I1002 20:20:05.590795   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.590801   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:05.590807   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:05.590850   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:05.615825   39074 cri.go:89] found id: ""
	I1002 20:20:05.615837   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.615843   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:05.615849   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:05.615886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:05.640124   39074 cri.go:89] found id: ""
	I1002 20:20:05.640137   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.640143   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:05.640148   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:05.640191   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:05.664435   39074 cri.go:89] found id: ""
	I1002 20:20:05.664451   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.664460   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:05.664469   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:05.664478   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:05.675270   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:05.675284   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:05.728958   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:05.722310    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.722829    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.724378    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.724835    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.726322    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:05.722310    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.722829    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.724378    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.724835    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.726322    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:05.728968   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:05.728977   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:05.789744   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:05.789763   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:05.816871   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:05.816886   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
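
Judging by the timestamps, the probe repeats on a short fixed interval (roughly every three seconds) until the apiserver answers or an overall deadline expires. A minimal shell-side equivalent of that wait loop, assuming the same port and a hypothetical five-minute deadline:

	# Poll until the apiserver port opens or the deadline passes.
	deadline=$((SECONDS + 300))
	until curl -ksf --max-time 2 https://localhost:8441/livez >/dev/null; do
	  [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for apiserver"; exit 1; }
	  sleep 3
	done
	echo "apiserver is up"

In this run the deadline is never met, so the cycle below repeats unchanged.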
	I1002 20:20:08.386603   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:08.396838   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:08.396887   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:08.421504   39074 cri.go:89] found id: ""
	I1002 20:20:08.421516   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.421526   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:08.421531   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:08.421573   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:08.445525   39074 cri.go:89] found id: ""
	I1002 20:20:08.445539   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.445551   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:08.445557   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:08.445611   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:08.473912   39074 cri.go:89] found id: ""
	I1002 20:20:08.473926   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.473932   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:08.473937   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:08.473977   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:08.498551   39074 cri.go:89] found id: ""
	I1002 20:20:08.498567   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.498575   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:08.498579   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:08.498619   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:08.522969   39074 cri.go:89] found id: ""
	I1002 20:20:08.522985   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.522991   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:08.522996   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:08.523041   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:08.546557   39074 cri.go:89] found id: ""
	I1002 20:20:08.546572   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.546579   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:08.546583   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:08.546628   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:08.570570   39074 cri.go:89] found id: ""
	I1002 20:20:08.570586   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.570595   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:08.570605   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:08.570619   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:08.639672   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:08.639691   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:08.651327   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:08.651345   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:08.704679   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:08.698150    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.698634    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.700211    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.700630    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.702066    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:08.698150    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.698634    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.700211    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.700630    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.702066    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:08.704698   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:08.704710   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:08.767857   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:08.767876   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:11.297723   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:11.307921   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:11.307963   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:11.337544   39074 cri.go:89] found id: ""
	I1002 20:20:11.337560   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.337577   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:11.337584   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:11.337640   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:11.363291   39074 cri.go:89] found id: ""
	I1002 20:20:11.363306   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.363315   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:11.363325   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:11.363366   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:11.387886   39074 cri.go:89] found id: ""
	I1002 20:20:11.387905   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.387915   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:11.387922   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:11.387972   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:11.412550   39074 cri.go:89] found id: ""
	I1002 20:20:11.412565   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.412573   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:11.412579   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:11.412677   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:11.437380   39074 cri.go:89] found id: ""
	I1002 20:20:11.437396   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.437405   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:11.437411   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:11.437452   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:11.461402   39074 cri.go:89] found id: ""
	I1002 20:20:11.461415   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.461421   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:11.461426   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:11.461471   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:11.486814   39074 cri.go:89] found id: ""
	I1002 20:20:11.486828   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.486833   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:11.486840   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:11.486848   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:11.497776   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:11.497791   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:11.552252   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:11.545574    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.546102    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.547700    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.548151    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.549707    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:11.545574    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.546102    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.547700    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.548151    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.549707    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:11.552263   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:11.552278   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:11.614501   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:11.614519   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:11.641975   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:11.641990   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:14.212363   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:14.223339   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:14.223387   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:14.247765   39074 cri.go:89] found id: ""
	I1002 20:20:14.247782   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.247790   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:14.247796   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:14.247850   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:14.272207   39074 cri.go:89] found id: ""
	I1002 20:20:14.272223   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.272230   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:14.272235   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:14.272275   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:14.296884   39074 cri.go:89] found id: ""
	I1002 20:20:14.296896   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.296901   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:14.296906   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:14.296953   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:14.322400   39074 cri.go:89] found id: ""
	I1002 20:20:14.322416   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.322424   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:14.322430   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:14.322483   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:14.348457   39074 cri.go:89] found id: ""
	I1002 20:20:14.348474   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.348482   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:14.348488   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:14.348529   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:14.371846   39074 cri.go:89] found id: ""
	I1002 20:20:14.371859   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.371866   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:14.371870   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:14.371910   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:14.396739   39074 cri.go:89] found id: ""
	I1002 20:20:14.396757   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.396765   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:14.396775   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:14.396785   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:14.461682   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:14.461703   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:14.473125   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:14.473138   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:14.527220   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:14.520100    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.520639    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.522150    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.522547    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.524758    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:14.520100    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.520639    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.522150    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.522547    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.524758    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:14.527230   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:14.527243   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:14.587080   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:14.587097   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:17.117171   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:17.127800   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:17.127860   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:17.153825   39074 cri.go:89] found id: ""
	I1002 20:20:17.153838   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.153845   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:17.153850   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:17.153890   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:17.179191   39074 cri.go:89] found id: ""
	I1002 20:20:17.179208   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.179218   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:17.179225   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:17.179283   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:17.203643   39074 cri.go:89] found id: ""
	I1002 20:20:17.203670   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.203677   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:17.203682   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:17.203729   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:17.228485   39074 cri.go:89] found id: ""
	I1002 20:20:17.228500   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.228509   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:17.228513   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:17.228552   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:17.254499   39074 cri.go:89] found id: ""
	I1002 20:20:17.254513   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.254519   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:17.254524   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:17.254568   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:17.280943   39074 cri.go:89] found id: ""
	I1002 20:20:17.280959   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.280968   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:17.280975   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:17.281022   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:17.306591   39074 cri.go:89] found id: ""
	I1002 20:20:17.306607   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.306615   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:17.306624   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:17.306638   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:17.365595   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:17.358275    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.359542    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.359993    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.361559    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.362067    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:17.358275    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.359542    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.359993    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.361559    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.362067    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:17.365605   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:17.365615   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:17.428722   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:17.428741   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:17.456720   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:17.456736   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:17.526400   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:17.526419   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
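
With no containers to inspect, the gatherer falls back to node-level sources: the kubelet and CRI-O service journals plus the kernel ring buffer filtered to warnings and above. The same bundle can be collected manually with the exact commands shown in the log:

	# Tail the service journals and the kernel log the way the gatherer does.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

The kubelet journal is usually the most informative of the three in this failure mode, since it records whether the static control-plane manifests were ever started.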
	I1002 20:20:20.038675   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:20.049608   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:20.049670   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:20.075162   39074 cri.go:89] found id: ""
	I1002 20:20:20.075178   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.075193   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:20.075200   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:20.075244   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:20.100714   39074 cri.go:89] found id: ""
	I1002 20:20:20.100730   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.100739   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:20.100745   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:20.100796   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:20.125515   39074 cri.go:89] found id: ""
	I1002 20:20:20.125530   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.125536   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:20.125541   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:20.125590   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:20.150152   39074 cri.go:89] found id: ""
	I1002 20:20:20.150166   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.150172   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:20.150176   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:20.150219   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:20.174386   39074 cri.go:89] found id: ""
	I1002 20:20:20.174400   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.174405   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:20.174410   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:20.174451   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:20.198954   39074 cri.go:89] found id: ""
	I1002 20:20:20.198967   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.198974   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:20.198978   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:20.199019   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:20.223494   39074 cri.go:89] found id: ""
	I1002 20:20:20.223506   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.223512   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:20.223520   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:20.223530   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:20.234227   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:20.234242   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:20.287508   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:20.281135    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.281556    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.283225    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.283624    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.285109    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:20.281135    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.281556    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.283225    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.283624    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.285109    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:20.287521   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:20.287530   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:20.353299   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:20.353316   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:20.381247   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:20.381264   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:22.948641   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:22.958867   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:22.958923   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:22.982867   39074 cri.go:89] found id: ""
	I1002 20:20:22.982888   39074 logs.go:282] 0 containers: []
	W1002 20:20:22.982896   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:22.982905   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:22.982963   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:23.008002   39074 cri.go:89] found id: ""
	I1002 20:20:23.008019   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.008025   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:23.008031   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:23.008102   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:23.032729   39074 cri.go:89] found id: ""
	I1002 20:20:23.032745   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.032755   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:23.032761   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:23.032805   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:23.057489   39074 cri.go:89] found id: ""
	I1002 20:20:23.057506   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.057513   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:23.057520   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:23.057574   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:23.082449   39074 cri.go:89] found id: ""
	I1002 20:20:23.082465   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.082473   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:23.082480   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:23.082533   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:23.106284   39074 cri.go:89] found id: ""
	I1002 20:20:23.106300   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.106308   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:23.106314   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:23.106356   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:23.131674   39074 cri.go:89] found id: ""
	I1002 20:20:23.131689   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.131698   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:23.131708   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:23.131719   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:23.202584   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:23.202606   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:23.213553   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:23.213567   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:23.267093   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:23.260296    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.260752    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.262302    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.262721    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.264215    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:23.260296    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.260752    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.262302    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.262721    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.264215    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:23.267107   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:23.267117   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:23.330039   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:23.330057   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
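
The cycle above repeats below at roughly three-second intervals: minikube probes for a kube-apiserver process with pgrep, then asks the CRI runtime for each control-plane container by name and finds none. A minimal Go sketch of that per-component check, using the same crictl flags seen in the log (an illustration of the pattern only, not minikube's actual cri.go implementation):

    // Probe each control-plane component the way the log shows:
    // `sudo crictl ps -a --quiet --name=<name>`, treating empty output
    // as "no container found".
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, name := range components {
    		// --quiet prints only container IDs, one per line;
    		// -a includes exited containers as well as running ones.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a",
    			"--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("crictl failed for %q: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			// This empty result is what produces the
    			// `found id: ""` / `0 containers: []` lines above.
    			fmt.Printf("no container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: found %d container(s)\n", name, len(ids))
    	}
    }
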
	I1002 20:20:25.859757   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:25.870050   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:25.870094   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:25.893890   39074 cri.go:89] found id: ""
	I1002 20:20:25.893903   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.893909   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:25.893913   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:25.893962   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:25.918711   39074 cri.go:89] found id: ""
	I1002 20:20:25.918724   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.918731   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:25.918740   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:25.918790   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:25.943028   39074 cri.go:89] found id: ""
	I1002 20:20:25.943040   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.943046   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:25.943050   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:25.943100   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:25.968555   39074 cri.go:89] found id: ""
	I1002 20:20:25.968569   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.968575   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:25.968580   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:25.968630   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:25.993321   39074 cri.go:89] found id: ""
	I1002 20:20:25.993334   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.993340   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:25.993344   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:25.993393   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:26.017729   39074 cri.go:89] found id: ""
	I1002 20:20:26.017755   39074 logs.go:282] 0 containers: []
	W1002 20:20:26.017761   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:26.017766   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:26.017807   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:26.042867   39074 cri.go:89] found id: ""
	I1002 20:20:26.042879   39074 logs.go:282] 0 containers: []
	W1002 20:20:26.042885   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:26.042892   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:26.042900   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:26.109498   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:26.109517   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:26.120700   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:26.120715   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:26.174158   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:26.167675    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.168158    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.169684    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.170006    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.171555    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:26.167675    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.168158    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.169684    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.170006    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.171555    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:26.174169   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:26.174177   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:26.232801   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:26.232820   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
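
Every `describe nodes` attempt fails identically: kubectl cannot even open a TCP connection to localhost:8441, so the failure is an absent apiserver rather than a TLS or credential problem. A quick probe of the port makes that distinction explicit (a hypothetical standalone check, not something the test harness runs):

    // Distinguish "nothing listening on 8441" from higher-level failures.
    // 8441 is the apiserver port this functional-test profile uses.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		// "connection refused" here means no kube-apiserver process
    		// is bound to the port, consistent with the empty crictl
    		// results in the surrounding log.
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is accepting connections")
    }
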
	I1002 20:20:28.760440   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:28.770974   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:28.771015   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:28.795071   39074 cri.go:89] found id: ""
	I1002 20:20:28.795084   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.795089   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:28.795094   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:28.795137   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:28.820101   39074 cri.go:89] found id: ""
	I1002 20:20:28.820114   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.820120   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:28.820125   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:28.820174   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:28.844954   39074 cri.go:89] found id: ""
	I1002 20:20:28.844967   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.844974   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:28.844978   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:28.845021   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:28.869971   39074 cri.go:89] found id: ""
	I1002 20:20:28.869984   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.869991   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:28.869996   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:28.870035   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:28.894419   39074 cri.go:89] found id: ""
	I1002 20:20:28.894434   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.894443   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:28.894454   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:28.894497   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:28.919785   39074 cri.go:89] found id: ""
	I1002 20:20:28.919798   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.919804   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:28.919808   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:28.919847   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:28.945626   39074 cri.go:89] found id: ""
	I1002 20:20:28.945644   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.945666   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:28.945676   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:28.945688   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:29.013406   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:29.013424   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:29.024733   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:29.024749   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:29.079492   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:29.073004    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.073547    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.075195    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.075620    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.077061    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:29.073004    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.073547    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.075195    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.075620    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.077061    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:29.079501   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:29.079510   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:29.143375   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:29.143393   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
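
The "container status" step is deliberately shelled through bash: the command string `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a` relies on backtick substitution and `||` fallbacks, which are shell syntax rather than argv, and falls back to docker if crictl is missing or fails. A local sketch of the same invocation, with the SSH transport that ssh_runner.go provides omitted:

    // Run the crictl-or-docker fallback pipeline the way the log shows,
    // via /bin/bash -c so the shell operators are interpreted.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    	}
    	fmt.Print(string(out))
    }
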
	I1002 20:20:31.673342   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:31.683685   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:31.683744   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:31.708355   39074 cri.go:89] found id: ""
	I1002 20:20:31.708368   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.708374   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:31.708378   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:31.708426   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:31.732066   39074 cri.go:89] found id: ""
	I1002 20:20:31.732080   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.732085   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:31.732090   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:31.732128   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:31.756955   39074 cri.go:89] found id: ""
	I1002 20:20:31.756968   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.756975   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:31.756981   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:31.757031   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:31.783141   39074 cri.go:89] found id: ""
	I1002 20:20:31.783157   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.783163   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:31.783168   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:31.783209   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:31.807678   39074 cri.go:89] found id: ""
	I1002 20:20:31.807691   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.807698   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:31.807703   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:31.807745   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:31.831482   39074 cri.go:89] found id: ""
	I1002 20:20:31.831494   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.831500   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:31.831504   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:31.831548   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:31.855667   39074 cri.go:89] found id: ""
	I1002 20:20:31.855683   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.855692   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:31.855700   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:31.855710   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:31.882380   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:31.882395   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:31.947814   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:31.947838   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:31.958919   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:31.958934   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:32.013721   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:32.006971    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.007473    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.009037    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.009432    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.010967    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:32.006971    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.007473    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.009037    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.009432    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.010967    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:32.013731   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:32.013742   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
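
The "Gathering logs for kubelet/CRI-O" steps are plain journalctl reads of the last 400 lines for each systemd unit. A minimal sketch of that gathering step, assuming local execution (the helper name and the fixed line count are illustrative, taken from the commands in the log):

    // Collect the tail of a systemd unit's journal, mirroring
    // `sudo journalctl -u <unit> -n 400` from the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func gatherUnitLogs(unit string) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("sudo journalctl -u %s -n 400", unit)).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	for _, unit := range []string{"kubelet", "crio"} {
    		logs, err := gatherUnitLogs(unit)
    		if err != nil {
    			fmt.Printf("journalctl -u %s failed: %v\n", unit, err)
    			continue
    		}
    		fmt.Printf("=== %s (%d bytes) ===\n", unit, len(logs))
    	}
    }
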
	I1002 20:20:34.575751   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:34.585980   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:34.586030   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:34.610997   39074 cri.go:89] found id: ""
	I1002 20:20:34.611013   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.611019   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:34.611024   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:34.611076   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:34.635375   39074 cri.go:89] found id: ""
	I1002 20:20:34.635388   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.635394   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:34.635401   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:34.635449   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:34.659513   39074 cri.go:89] found id: ""
	I1002 20:20:34.659526   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.659532   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:34.659536   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:34.659584   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:34.683614   39074 cri.go:89] found id: ""
	I1002 20:20:34.683628   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.683634   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:34.683638   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:34.683709   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:34.707536   39074 cri.go:89] found id: ""
	I1002 20:20:34.707548   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.707554   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:34.707558   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:34.707606   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:34.730813   39074 cri.go:89] found id: ""
	I1002 20:20:34.730829   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.730838   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:34.730844   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:34.730886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:34.756746   39074 cri.go:89] found id: ""
	I1002 20:20:34.756758   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.756763   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:34.756770   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:34.756779   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:34.823845   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:34.823864   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:34.834944   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:34.834959   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:34.889016   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:34.882235    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.882739    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.884456    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.884966    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.886550    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:34.882235    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.882739    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.884456    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.884966    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.886550    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:34.889027   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:34.889039   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:34.952102   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:34.952120   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:37.482142   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:37.492739   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:37.492783   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:37.518265   39074 cri.go:89] found id: ""
	I1002 20:20:37.518279   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.518285   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:37.518290   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:37.518332   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:37.544309   39074 cri.go:89] found id: ""
	I1002 20:20:37.544322   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.544327   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:37.544332   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:37.544371   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:37.568928   39074 cri.go:89] found id: ""
	I1002 20:20:37.568947   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.568955   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:37.568960   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:37.569000   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:37.593112   39074 cri.go:89] found id: ""
	I1002 20:20:37.593125   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.593131   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:37.593135   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:37.593175   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:37.617378   39074 cri.go:89] found id: ""
	I1002 20:20:37.617393   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.617399   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:37.617404   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:37.617446   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:37.641497   39074 cri.go:89] found id: ""
	I1002 20:20:37.641509   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.641514   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:37.641519   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:37.641560   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:37.665025   39074 cri.go:89] found id: ""
	I1002 20:20:37.665037   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.665043   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:37.665050   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:37.665059   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:37.729867   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:37.729886   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:37.741144   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:37.741161   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:37.794545   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:37.788104    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.788618    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.790124    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.790578    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.791673    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:37.788104    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.788618    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.790124    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.790578    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.791673    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:37.794554   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:37.794563   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:37.858517   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:37.858537   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:40.387221   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:40.397406   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:40.397456   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:40.422226   39074 cri.go:89] found id: ""
	I1002 20:20:40.422241   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.422249   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:40.422256   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:40.422312   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:40.448898   39074 cri.go:89] found id: ""
	I1002 20:20:40.448914   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.448922   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:40.448928   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:40.448970   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:40.473866   39074 cri.go:89] found id: ""
	I1002 20:20:40.473883   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.473891   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:40.473898   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:40.473940   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:40.499789   39074 cri.go:89] found id: ""
	I1002 20:20:40.499804   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.499820   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:40.499827   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:40.499870   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:40.524055   39074 cri.go:89] found id: ""
	I1002 20:20:40.524070   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.524078   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:40.524084   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:40.524131   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:40.549681   39074 cri.go:89] found id: ""
	I1002 20:20:40.549697   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.549705   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:40.549709   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:40.549751   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:40.574534   39074 cri.go:89] found id: ""
	I1002 20:20:40.574551   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.574559   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:40.574568   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:40.574585   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:40.585332   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:40.585345   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:40.639552   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:40.632883    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.633368    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.634904    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.635294    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.636825    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:40.632883    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.633368    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.634904    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.635294    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.636825    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:40.639561   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:40.639570   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:40.703074   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:40.703093   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:40.731458   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:40.731471   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:43.302779   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:43.313194   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:43.313249   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:43.340348   39074 cri.go:89] found id: ""
	I1002 20:20:43.340361   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.340367   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:43.340372   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:43.340416   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:43.365438   39074 cri.go:89] found id: ""
	I1002 20:20:43.365453   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.365461   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:43.365467   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:43.365530   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:43.392295   39074 cri.go:89] found id: ""
	I1002 20:20:43.392308   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.392314   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:43.392319   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:43.392358   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:43.417313   39074 cri.go:89] found id: ""
	I1002 20:20:43.417326   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.417332   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:43.417336   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:43.417381   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:43.441890   39074 cri.go:89] found id: ""
	I1002 20:20:43.441907   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.441913   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:43.441917   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:43.441959   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:43.467410   39074 cri.go:89] found id: ""
	I1002 20:20:43.467427   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.467438   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:43.467444   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:43.467501   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:43.492142   39074 cri.go:89] found id: ""
	I1002 20:20:43.492154   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.492160   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:43.492168   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:43.492178   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:43.520876   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:43.520907   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:43.586242   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:43.586258   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:43.597341   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:43.597355   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:43.651087   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:43.644558    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.645043    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.646588    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.647003    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.648460    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:43.644558    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.645043    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.646588    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.647003    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.648460    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:43.651098   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:43.651112   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:46.210362   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:46.220658   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:46.220710   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:46.245577   39074 cri.go:89] found id: ""
	I1002 20:20:46.245591   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.245597   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:46.245601   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:46.245641   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:46.270950   39074 cri.go:89] found id: ""
	I1002 20:20:46.270965   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.270974   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:46.270979   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:46.271024   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:46.295887   39074 cri.go:89] found id: ""
	I1002 20:20:46.295903   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.295911   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:46.295917   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:46.295969   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:46.321705   39074 cri.go:89] found id: ""
	I1002 20:20:46.321721   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.321730   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:46.321736   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:46.321785   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:46.348811   39074 cri.go:89] found id: ""
	I1002 20:20:46.348827   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.348836   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:46.348842   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:46.348900   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:46.373477   39074 cri.go:89] found id: ""
	I1002 20:20:46.373493   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.373502   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:46.373508   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:46.373552   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:46.398884   39074 cri.go:89] found id: ""
	I1002 20:20:46.398900   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.398908   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:46.398917   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:46.398926   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:46.463113   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:46.463131   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:46.474566   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:46.474578   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:46.529468   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:46.522633    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.523203    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.524813    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.525199    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.526736    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:46.529479   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:46.529489   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:46.590223   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:46.590241   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:49.118745   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:49.128971   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:49.129012   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:49.155632   39074 cri.go:89] found id: ""
	I1002 20:20:49.155662   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.155683   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:49.155689   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:49.155734   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:49.180611   39074 cri.go:89] found id: ""
	I1002 20:20:49.180629   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.180635   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:49.180639   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:49.180703   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:49.206534   39074 cri.go:89] found id: ""
	I1002 20:20:49.206557   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.206563   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:49.206568   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:49.206617   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:49.231608   39074 cri.go:89] found id: ""
	I1002 20:20:49.231625   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.231633   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:49.231641   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:49.231713   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:49.256407   39074 cri.go:89] found id: ""
	I1002 20:20:49.256426   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.256433   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:49.256439   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:49.256490   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:49.281494   39074 cri.go:89] found id: ""
	I1002 20:20:49.281509   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.281517   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:49.281524   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:49.281571   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:49.306502   39074 cri.go:89] found id: ""
	I1002 20:20:49.306518   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.306526   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:49.306534   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:49.306543   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:49.374386   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:49.374408   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:49.385910   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:49.385928   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:49.440525   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:49.433626    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.434180    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.435811    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.436224    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.437741    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:49.440537   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:49.440549   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:49.501317   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:49.501334   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:52.031253   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:52.041701   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:52.041754   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:52.066302   39074 cri.go:89] found id: ""
	I1002 20:20:52.066315   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.066321   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:52.066325   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:52.066375   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:52.091575   39074 cri.go:89] found id: ""
	I1002 20:20:52.091591   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.091600   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:52.091606   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:52.091674   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:52.115838   39074 cri.go:89] found id: ""
	I1002 20:20:52.115854   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.115861   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:52.115867   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:52.115914   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:52.141387   39074 cri.go:89] found id: ""
	I1002 20:20:52.141402   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.141412   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:52.141417   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:52.141460   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:52.166810   39074 cri.go:89] found id: ""
	I1002 20:20:52.166823   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.166828   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:52.166832   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:52.166872   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:52.192399   39074 cri.go:89] found id: ""
	I1002 20:20:52.192413   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.192420   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:52.192425   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:52.192473   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:52.217364   39074 cri.go:89] found id: ""
	I1002 20:20:52.217378   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.217385   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:52.217391   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:52.217401   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:52.272135   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:52.265457    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.266093    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.267566    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.268058    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.269531    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:52.272144   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:52.272152   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:52.334330   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:52.334352   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:52.364500   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:52.364514   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:52.427683   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:52.427702   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:54.939454   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:54.950121   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:54.950174   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:54.975667   39074 cri.go:89] found id: ""
	I1002 20:20:54.975683   39074 logs.go:282] 0 containers: []
	W1002 20:20:54.975692   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:54.975697   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:54.975739   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:55.000676   39074 cri.go:89] found id: ""
	I1002 20:20:55.000692   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.000702   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:55.000711   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:55.000772   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:55.025484   39074 cri.go:89] found id: ""
	I1002 20:20:55.025499   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.025509   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:55.025516   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:55.025570   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:55.050548   39074 cri.go:89] found id: ""
	I1002 20:20:55.050562   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.050570   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:55.050576   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:55.050623   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:55.075593   39074 cri.go:89] found id: ""
	I1002 20:20:55.075608   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.075613   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:55.075618   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:55.075683   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:55.100182   39074 cri.go:89] found id: ""
	I1002 20:20:55.100196   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.100202   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:55.100206   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:55.100245   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:55.125869   39074 cri.go:89] found id: ""
	I1002 20:20:55.125883   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.125890   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:55.125898   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:55.125907   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:55.194871   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:55.194894   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:55.206048   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:55.206063   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:55.259703   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:55.253143    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.253642    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.255145    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.255538    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.257050    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:55.259714   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:55.259723   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:55.319375   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:55.319393   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:57.847993   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:57.858498   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:57.858550   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:57.881390   39074 cri.go:89] found id: ""
	I1002 20:20:57.881404   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.881412   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:57.881416   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:57.881460   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:57.905251   39074 cri.go:89] found id: ""
	I1002 20:20:57.905267   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.905274   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:57.905279   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:57.905318   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:57.931213   39074 cri.go:89] found id: ""
	I1002 20:20:57.931226   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.931233   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:57.931238   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:57.931280   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:57.954527   39074 cri.go:89] found id: ""
	I1002 20:20:57.954544   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.954558   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:57.954564   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:57.954604   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:57.978788   39074 cri.go:89] found id: ""
	I1002 20:20:57.978801   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.978807   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:57.978811   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:57.978861   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:58.004052   39074 cri.go:89] found id: ""
	I1002 20:20:58.004067   39074 logs.go:282] 0 containers: []
	W1002 20:20:58.004075   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:58.004082   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:58.004123   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:58.028322   39074 cri.go:89] found id: ""
	I1002 20:20:58.028335   39074 logs.go:282] 0 containers: []
	W1002 20:20:58.028341   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:58.028348   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:58.028357   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:58.094257   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:58.094275   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:58.105903   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:58.105918   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:58.160072   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:58.153230   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.153795   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.155325   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.155732   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.157257   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:58.160081   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:58.160090   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:58.219413   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:58.219430   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:00.748760   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:00.759397   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:00.759452   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:00.783722   39074 cri.go:89] found id: ""
	I1002 20:21:00.783738   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.783747   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:00.783755   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:00.783811   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:00.808536   39074 cri.go:89] found id: ""
	I1002 20:21:00.808552   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.808560   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:00.808565   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:00.808619   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:00.833822   39074 cri.go:89] found id: ""
	I1002 20:21:00.833839   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.833846   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:00.833850   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:00.833893   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:00.857297   39074 cri.go:89] found id: ""
	I1002 20:21:00.857311   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.857317   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:00.857322   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:00.857372   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:00.882563   39074 cri.go:89] found id: ""
	I1002 20:21:00.882578   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.882586   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:00.882592   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:00.882664   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:00.907673   39074 cri.go:89] found id: ""
	I1002 20:21:00.907689   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.907698   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:00.907704   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:00.907746   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:00.932133   39074 cri.go:89] found id: ""
	I1002 20:21:00.932148   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.932156   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:00.932165   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:00.932179   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:01.000177   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:01.000198   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:01.012252   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:01.012267   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:01.068351   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:01.061526   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.062112   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.063638   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.064089   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.065590   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:01.068361   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:01.068370   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:01.128987   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:01.129007   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:03.659911   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:03.670393   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:03.670439   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:03.695784   39074 cri.go:89] found id: ""
	I1002 20:21:03.695796   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.695802   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:03.695806   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:03.695846   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:03.720085   39074 cri.go:89] found id: ""
	I1002 20:21:03.720098   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.720104   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:03.720109   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:03.720150   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:03.745925   39074 cri.go:89] found id: ""
	I1002 20:21:03.745940   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.745950   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:03.745958   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:03.745996   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:03.770616   39074 cri.go:89] found id: ""
	I1002 20:21:03.770632   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.770639   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:03.770655   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:03.770711   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:03.793953   39074 cri.go:89] found id: ""
	I1002 20:21:03.793969   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.793977   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:03.793982   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:03.794028   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:03.818909   39074 cri.go:89] found id: ""
	I1002 20:21:03.818925   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.818933   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:03.818940   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:03.818996   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:03.843200   39074 cri.go:89] found id: ""
	I1002 20:21:03.843213   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.843219   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:03.843228   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:03.843237   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:03.901520   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:03.901537   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:03.929305   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:03.929319   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:03.993117   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:03.993134   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:04.004664   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:04.004678   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:04.058624   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:04.051963   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.052457   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.053947   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.054366   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.055857   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:06.560322   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:06.570866   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:06.570909   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:06.594524   39074 cri.go:89] found id: ""
	I1002 20:21:06.594536   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.594542   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:06.594547   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:06.594586   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:06.619717   39074 cri.go:89] found id: ""
	I1002 20:21:06.619730   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.619741   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:06.619747   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:06.619787   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:06.643975   39074 cri.go:89] found id: ""
	I1002 20:21:06.643989   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.643994   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:06.643999   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:06.644051   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:06.667642   39074 cri.go:89] found id: ""
	I1002 20:21:06.667674   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.667683   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:06.667690   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:06.667735   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:06.692383   39074 cri.go:89] found id: ""
	I1002 20:21:06.692398   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.692406   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:06.692411   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:06.692459   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:06.716132   39074 cri.go:89] found id: ""
	I1002 20:21:06.716148   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.716157   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:06.716162   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:06.716206   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:06.740781   39074 cri.go:89] found id: ""
	I1002 20:21:06.740794   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.740800   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:06.740809   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:06.740817   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:06.809048   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:06.809064   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:06.820121   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:06.820134   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:06.873477   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:06.866935   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:06.867506   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:06.869037   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:06.869480   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:06.870947   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:06.873489   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:06.873503   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:06.932869   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:06.932885   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:09.461200   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:09.471453   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:09.471494   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:09.495052   39074 cri.go:89] found id: ""
	I1002 20:21:09.495076   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.495083   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:09.495090   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:09.495142   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:09.520680   39074 cri.go:89] found id: ""
	I1002 20:21:09.520694   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.520699   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:09.520704   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:09.520745   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:09.544279   39074 cri.go:89] found id: ""
	I1002 20:21:09.544292   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.544300   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:09.544305   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:09.544343   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:09.568552   39074 cri.go:89] found id: ""
	I1002 20:21:09.568564   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.568570   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:09.568575   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:09.568636   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:09.593483   39074 cri.go:89] found id: ""
	I1002 20:21:09.593496   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.593504   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:09.593509   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:09.593548   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:09.618504   39074 cri.go:89] found id: ""
	I1002 20:21:09.618518   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.618524   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:09.618529   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:09.618568   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:09.644028   39074 cri.go:89] found id: ""
	I1002 20:21:09.644040   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.644046   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:09.644054   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:09.644068   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:09.709968   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:09.709989   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:09.721282   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:09.721295   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:09.774963   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:09.768383   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:09.768943   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:09.770534   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:09.770976   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:09.772525   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:09.774974   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:09.774985   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:09.833762   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:09.833780   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
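
Every "describe nodes" attempt in this run fails the same way: kubectl dials localhost:8441 and gets connection refused, because no kube-apiserver container is running yet. A minimal manual check of that same endpoint, assuming shell access to the node under test (the `minikube ssh -p <profile>` entry point is an assumption, not taken from this log):

	# From the host: open a shell on the node under test (assumed entry point)
	minikube ssh -p <profile>
	# On the node: probe the endpoint kubectl is dialing; -k skips TLS
	# verification of the cluster's self-signed certificate.
	curl -k https://localhost:8441/healthz
	# While the apiserver is down this fails with
	# "curl: (7) Failed to connect to localhost port 8441"
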
	I1002 20:21:12.362468   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:12.372596   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:12.372637   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:12.398178   39074 cri.go:89] found id: ""
	I1002 20:21:12.398193   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.398202   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:12.398208   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:12.398255   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:12.422734   39074 cri.go:89] found id: ""
	I1002 20:21:12.422751   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.422759   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:12.422764   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:12.422806   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:12.446773   39074 cri.go:89] found id: ""
	I1002 20:21:12.446791   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.446799   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:12.446806   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:12.446847   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:12.470795   39074 cri.go:89] found id: ""
	I1002 20:21:12.470808   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.470815   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:12.470819   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:12.470858   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:12.494783   39074 cri.go:89] found id: ""
	I1002 20:21:12.494796   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.494801   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:12.494805   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:12.494845   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:12.518163   39074 cri.go:89] found id: ""
	I1002 20:21:12.518177   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.518182   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:12.518187   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:12.518226   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:12.542626   39074 cri.go:89] found id: ""
	I1002 20:21:12.542638   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.542643   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:12.542663   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:12.542679   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:12.553111   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:12.553122   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:12.607093   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:12.600525   10620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:12.601040   10620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:12.602535   10620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:12.602952   10620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:12.604425   10620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:12.607103   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:12.607112   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:12.666819   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:12.666837   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:12.694057   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:12.694071   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
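
The probe cycle above (a pgrep for a host kube-apiserver process, then one CRI query per control-plane component) repeats verbatim for each component name. A condensed sketch of the same sequence, using only commands already shown in this log and assuming crictl is on the node's PATH:

	# Check for an apiserver process first, as the log does
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Then ask the CRI runtime for each expected container by name
	for name in kube-apiserver etcd coredns kube-scheduler \
	            kube-proxy kube-controller-manager kindnet; do
	  echo "== ${name} =="
	  sudo crictl ps -a --quiet --name="${name}"
	done
	# Empty output for every name matches the 'found id: ""' lines above.
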
	I1002 20:21:15.261212   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:15.271321   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:15.271362   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:15.296775   39074 cri.go:89] found id: ""
	I1002 20:21:15.296788   39074 logs.go:282] 0 containers: []
	W1002 20:21:15.296795   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:15.296799   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:15.296841   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:15.320931   39074 cri.go:89] found id: ""
	I1002 20:21:15.320944   39074 logs.go:282] 0 containers: []
	W1002 20:21:15.320950   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:15.320954   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:15.320996   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:15.344685   39074 cri.go:89] found id: ""
	I1002 20:21:15.344698   39074 logs.go:282] 0 containers: []
	W1002 20:21:15.344704   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:15.344709   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:15.344748   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:15.368513   39074 cri.go:89] found id: ""
	I1002 20:21:15.368527   39074 logs.go:282] 0 containers: []
	W1002 20:21:15.368534   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:15.368538   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:15.368605   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:15.392399   39074 cri.go:89] found id: ""
	I1002 20:21:15.392414   39074 logs.go:282] 0 containers: []
	W1002 20:21:15.392422   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:15.392428   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:15.392486   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:15.416043   39074 cri.go:89] found id: ""
	I1002 20:21:15.416056   39074 logs.go:282] 0 containers: []
	W1002 20:21:15.416062   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:15.416066   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:15.416110   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:15.440250   39074 cri.go:89] found id: ""
	I1002 20:21:15.440263   39074 logs.go:282] 0 containers: []
	W1002 20:21:15.440269   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:15.440276   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:15.440285   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:15.467533   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:15.467548   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:15.533766   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:15.533790   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:15.544835   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:15.544851   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:15.599678   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:15.592798   10755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:15.593349   10755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:15.594871   10755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:15.595307   10755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:15.596919   10755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:15.599691   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:15.599702   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:18.165132   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:18.175676   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:18.175725   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:18.199922   39074 cri.go:89] found id: ""
	I1002 20:21:18.199940   39074 logs.go:282] 0 containers: []
	W1002 20:21:18.199946   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:18.199951   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:18.199992   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:18.223152   39074 cri.go:89] found id: ""
	I1002 20:21:18.223169   39074 logs.go:282] 0 containers: []
	W1002 20:21:18.223177   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:18.223184   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:18.223227   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:18.246742   39074 cri.go:89] found id: ""
	I1002 20:21:18.246757   39074 logs.go:282] 0 containers: []
	W1002 20:21:18.246766   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:18.246772   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:18.246816   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:18.270031   39074 cri.go:89] found id: ""
	I1002 20:21:18.270044   39074 logs.go:282] 0 containers: []
	W1002 20:21:18.270050   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:18.270055   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:18.270106   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:18.294199   39074 cri.go:89] found id: ""
	I1002 20:21:18.294213   39074 logs.go:282] 0 containers: []
	W1002 20:21:18.294220   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:18.294224   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:18.294265   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:18.319955   39074 cri.go:89] found id: ""
	I1002 20:21:18.319968   39074 logs.go:282] 0 containers: []
	W1002 20:21:18.319974   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:18.319979   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:18.320027   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:18.346187   39074 cri.go:89] found id: ""
	I1002 20:21:18.346202   39074 logs.go:282] 0 containers: []
	W1002 20:21:18.346209   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:18.346218   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:18.346230   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:18.412451   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:18.412469   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:18.423898   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:18.423911   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:18.477273   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:18.470574   10859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:18.471135   10859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:18.472841   10859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:18.473326   10859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:18.474859   10859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:18.477287   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:18.477297   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:18.536355   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:18.536373   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:21.066419   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:21.076563   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:21.076666   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:21.102164   39074 cri.go:89] found id: ""
	I1002 20:21:21.102177   39074 logs.go:282] 0 containers: []
	W1002 20:21:21.102183   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:21.102188   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:21.102232   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:21.129158   39074 cri.go:89] found id: ""
	I1002 20:21:21.129173   39074 logs.go:282] 0 containers: []
	W1002 20:21:21.129182   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:21.129188   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:21.129231   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:21.154477   39074 cri.go:89] found id: ""
	I1002 20:21:21.154492   39074 logs.go:282] 0 containers: []
	W1002 20:21:21.154497   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:21.154502   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:21.154546   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:21.180534   39074 cri.go:89] found id: ""
	I1002 20:21:21.180549   39074 logs.go:282] 0 containers: []
	W1002 20:21:21.180555   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:21.180561   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:21.180620   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:21.206019   39074 cri.go:89] found id: ""
	I1002 20:21:21.206031   39074 logs.go:282] 0 containers: []
	W1002 20:21:21.206038   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:21.206046   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:21.206084   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:21.230114   39074 cri.go:89] found id: ""
	I1002 20:21:21.230127   39074 logs.go:282] 0 containers: []
	W1002 20:21:21.230133   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:21.230138   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:21.230178   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:21.254824   39074 cri.go:89] found id: ""
	I1002 20:21:21.254838   39074 logs.go:282] 0 containers: []
	W1002 20:21:21.254844   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:21.254851   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:21.254860   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:21.317018   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:21.317035   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:21.343844   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:21.343858   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:21.408925   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:21.408944   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:21.419821   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:21.419835   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:21.471978   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:21.465582   10999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:21.466082   10999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:21.467579   10999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:21.467942   10999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:21.469419   10999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:23.973621   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:23.984622   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:23.984691   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:24.008789   39074 cri.go:89] found id: ""
	I1002 20:21:24.008805   39074 logs.go:282] 0 containers: []
	W1002 20:21:24.008814   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:24.008820   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:24.008867   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:24.034564   39074 cri.go:89] found id: ""
	I1002 20:21:24.034581   39074 logs.go:282] 0 containers: []
	W1002 20:21:24.034596   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:24.034603   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:24.034643   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:24.059176   39074 cri.go:89] found id: ""
	I1002 20:21:24.059189   39074 logs.go:282] 0 containers: []
	W1002 20:21:24.059194   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:24.059199   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:24.059247   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:24.083475   39074 cri.go:89] found id: ""
	I1002 20:21:24.083488   39074 logs.go:282] 0 containers: []
	W1002 20:21:24.083495   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:24.083499   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:24.083550   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:24.108059   39074 cri.go:89] found id: ""
	I1002 20:21:24.108072   39074 logs.go:282] 0 containers: []
	W1002 20:21:24.108078   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:24.108083   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:24.108124   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:24.132959   39074 cri.go:89] found id: ""
	I1002 20:21:24.132973   39074 logs.go:282] 0 containers: []
	W1002 20:21:24.132978   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:24.132983   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:24.133023   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:24.157626   39074 cri.go:89] found id: ""
	I1002 20:21:24.157638   39074 logs.go:282] 0 containers: []
	W1002 20:21:24.157644   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:24.157666   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:24.157677   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:24.222240   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:24.222258   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:24.252463   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:24.252477   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:24.322663   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:24.322681   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:24.334105   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:24.334119   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:24.388449   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:24.381839   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:24.382360   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:24.383974   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:24.384421   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:24.385999   11124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:26.890112   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:26.900667   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:26.900710   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:26.924781   39074 cri.go:89] found id: ""
	I1002 20:21:26.924794   39074 logs.go:282] 0 containers: []
	W1002 20:21:26.924800   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:26.924805   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:26.924846   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:26.948571   39074 cri.go:89] found id: ""
	I1002 20:21:26.948586   39074 logs.go:282] 0 containers: []
	W1002 20:21:26.948600   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:26.948606   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:26.948661   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:26.972451   39074 cri.go:89] found id: ""
	I1002 20:21:26.972466   39074 logs.go:282] 0 containers: []
	W1002 20:21:26.972472   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:26.972478   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:26.972525   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:26.997499   39074 cri.go:89] found id: ""
	I1002 20:21:26.997512   39074 logs.go:282] 0 containers: []
	W1002 20:21:26.997518   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:26.997523   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:26.997572   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:27.022056   39074 cri.go:89] found id: ""
	I1002 20:21:27.022072   39074 logs.go:282] 0 containers: []
	W1002 20:21:27.022078   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:27.022083   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:27.022124   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:27.046069   39074 cri.go:89] found id: ""
	I1002 20:21:27.046083   39074 logs.go:282] 0 containers: []
	W1002 20:21:27.046089   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:27.046095   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:27.046135   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:27.070455   39074 cri.go:89] found id: ""
	I1002 20:21:27.070469   39074 logs.go:282] 0 containers: []
	W1002 20:21:27.070475   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:27.070482   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:27.070493   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:27.139300   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:27.139317   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:27.150073   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:27.150086   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:27.203171   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:27.196472   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.196973   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.198530   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.198931   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.200409   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:27.203181   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:27.203189   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:27.265474   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:27.265492   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:29.793992   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:29.804235   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:29.804279   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:29.828729   39074 cri.go:89] found id: ""
	I1002 20:21:29.828743   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.828751   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:29.828757   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:29.828809   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:29.853355   39074 cri.go:89] found id: ""
	I1002 20:21:29.853372   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.853382   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:29.853388   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:29.853439   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:29.878218   39074 cri.go:89] found id: ""
	I1002 20:21:29.878231   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.878236   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:29.878241   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:29.878281   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:29.903091   39074 cri.go:89] found id: ""
	I1002 20:21:29.903105   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.903114   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:29.903120   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:29.903161   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:29.927692   39074 cri.go:89] found id: ""
	I1002 20:21:29.927710   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.927716   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:29.927720   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:29.927769   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:29.952593   39074 cri.go:89] found id: ""
	I1002 20:21:29.952608   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.952618   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:29.952624   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:29.952693   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:29.977117   39074 cri.go:89] found id: ""
	I1002 20:21:29.977133   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.977140   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:29.977150   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:29.977161   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:30.004687   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:30.004701   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:30.071166   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:30.071188   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:30.082387   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:30.082403   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:30.137131   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:30.130268   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.130846   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.132362   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.132758   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.134348   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:30.137140   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:30.137148   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:32.698009   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:32.708134   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:32.708177   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:32.734103   39074 cri.go:89] found id: ""
	I1002 20:21:32.734117   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.734126   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:32.734131   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:32.734179   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:32.758404   39074 cri.go:89] found id: ""
	I1002 20:21:32.758417   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.758423   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:32.758431   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:32.758477   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:32.784135   39074 cri.go:89] found id: ""
	I1002 20:21:32.784150   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.784157   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:32.784161   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:32.784204   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:32.809641   39074 cri.go:89] found id: ""
	I1002 20:21:32.809684   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.809693   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:32.809697   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:32.809739   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:32.833831   39074 cri.go:89] found id: ""
	I1002 20:21:32.833847   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.833856   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:32.833862   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:32.833918   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:32.858510   39074 cri.go:89] found id: ""
	I1002 20:21:32.858523   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.858531   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:32.858537   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:32.858590   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:32.882883   39074 cri.go:89] found id: ""
	I1002 20:21:32.882898   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.882907   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:32.882916   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:32.882928   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:32.951104   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:32.951125   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:32.962042   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:32.962058   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:33.015746   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:33.009215   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.009701   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.011251   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.011629   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.013187   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:33.015758   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:33.015772   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:33.074804   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:33.074821   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
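
Cycles of this shape recur roughly every three seconds for the rest of the wait: minikube keeps polling until an apiserver appears, and in this run it never does. A plain-bash equivalent of that wait loop (the 120s bound is an assumption; the real deadline is not visible in this excerpt):

	# Poll for the apiserver process the way the log does, ~3s apart,
	# giving up after an assumed 120s deadline.
	timeout 120 bash -c \
	  'until sudo pgrep -xnf "kube-apiserver.*minikube.*"; do sleep 3; done' \
	  || echo "kube-apiserver never came up"
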
	I1002 20:21:35.603185   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:35.613834   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:35.613876   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:35.638330   39074 cri.go:89] found id: ""
	I1002 20:21:35.638342   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.638348   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:35.638353   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:35.638391   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:35.661464   39074 cri.go:89] found id: ""
	I1002 20:21:35.661476   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.661482   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:35.661487   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:35.661529   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:35.684962   39074 cri.go:89] found id: ""
	I1002 20:21:35.684977   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.684983   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:35.684987   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:35.685036   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:35.708990   39074 cri.go:89] found id: ""
	I1002 20:21:35.709002   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.709007   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:35.709012   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:35.709054   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:35.732099   39074 cri.go:89] found id: ""
	I1002 20:21:35.732116   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.732125   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:35.732134   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:35.732179   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:35.756437   39074 cri.go:89] found id: ""
	I1002 20:21:35.756450   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.756456   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:35.756461   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:35.756501   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:35.782205   39074 cri.go:89] found id: ""
	I1002 20:21:35.782219   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.782225   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:35.782231   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:35.782240   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:35.849923   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:35.849941   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:35.861090   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:35.861104   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:35.914924   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:35.908496   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.908987   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.910547   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.911018   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.912498   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:35.908496   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.908987   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.910547   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.911018   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.912498   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:35.914934   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:35.914943   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:35.975011   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:35.975031   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:38.503369   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:38.513583   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:38.513630   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:38.538175   39074 cri.go:89] found id: ""
	I1002 20:21:38.538190   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.538197   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:38.538201   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:38.538239   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:38.562421   39074 cri.go:89] found id: ""
	I1002 20:21:38.562434   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.562440   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:38.562444   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:38.562510   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:38.587376   39074 cri.go:89] found id: ""
	I1002 20:21:38.587388   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.587394   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:38.587400   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:38.587439   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:38.611178   39074 cri.go:89] found id: ""
	I1002 20:21:38.611192   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.611198   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:38.611202   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:38.611243   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:38.635805   39074 cri.go:89] found id: ""
	I1002 20:21:38.635817   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.635823   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:38.635827   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:38.635872   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:38.660043   39074 cri.go:89] found id: ""
	I1002 20:21:38.660065   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.660071   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:38.660075   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:38.660115   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:38.683490   39074 cri.go:89] found id: ""
	I1002 20:21:38.683502   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.683508   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:38.683515   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:38.683522   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:38.741516   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:38.741534   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:38.769294   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:38.769308   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:38.838736   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:38.838753   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:38.849582   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:38.849612   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:38.903424   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:38.896399   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.896943   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.898498   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.898964   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.900463   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:38.896399   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.896943   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.898498   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.898964   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.900463   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
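The block above is one full iteration of minikube's apiserver wait loop: probe for a kube-apiserver process, list CRI containers for each control-plane component (all come back empty), gather kubelet, dmesg, CRI-O, and container-status logs, then fail `kubectl describe nodes` because nothing answers on localhost:8441 (every attempt dials [::1]:8441 and is refused, so the apiserver never bound the port). The pgrep timestamps (20:21:35.6, 20:21:38.5, 20:21:41.4, ...) show the loop repeating roughly every three seconds. A generic bash sketch of this kind of poll-until-healthy loop, illustrative only and not minikube's actual implementation:

# Poll every ~3s until an apiserver process appears, as the log above does.
# The pgrep pattern is the one from the log, quoted here for shell safety.
until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
    # each round, re-gather diagnostics (kubelet/CRI-O logs, container status, ...)
    sleep 3
done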
	I1002 20:21:41.405089   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:41.415377   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:41.415426   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:41.440687   39074 cri.go:89] found id: ""
	I1002 20:21:41.440700   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.440707   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:41.440712   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:41.440755   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:41.465054   39074 cri.go:89] found id: ""
	I1002 20:21:41.465075   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.465081   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:41.465086   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:41.465140   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:41.489735   39074 cri.go:89] found id: ""
	I1002 20:21:41.489748   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.489754   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:41.489759   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:41.489799   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:41.514723   39074 cri.go:89] found id: ""
	I1002 20:21:41.514735   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.514740   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:41.514745   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:41.514786   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:41.538573   39074 cri.go:89] found id: ""
	I1002 20:21:41.538586   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.538592   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:41.538597   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:41.538669   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:41.563317   39074 cri.go:89] found id: ""
	I1002 20:21:41.563334   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.563343   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:41.563349   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:41.563389   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:41.587493   39074 cri.go:89] found id: ""
	I1002 20:21:41.587509   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.587515   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:41.587522   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:41.587532   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:41.657445   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:41.657473   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:41.668994   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:41.669012   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:41.722898   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:41.715908   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.716372   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.718002   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.718454   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.720024   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:41.715908   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.716372   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.718002   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.718454   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.720024   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:41.722911   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:41.722919   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:41.780887   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:41.780909   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:44.310936   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:44.322755   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:44.322807   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:44.347939   39074 cri.go:89] found id: ""
	I1002 20:21:44.347951   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.347958   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:44.347962   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:44.348004   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:44.372444   39074 cri.go:89] found id: ""
	I1002 20:21:44.372460   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.372466   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:44.372472   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:44.372514   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:44.397131   39074 cri.go:89] found id: ""
	I1002 20:21:44.397148   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.397157   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:44.397163   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:44.397215   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:44.421209   39074 cri.go:89] found id: ""
	I1002 20:21:44.421222   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.421228   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:44.421232   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:44.421269   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:44.445113   39074 cri.go:89] found id: ""
	I1002 20:21:44.445125   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.445131   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:44.445135   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:44.445178   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:44.469164   39074 cri.go:89] found id: ""
	I1002 20:21:44.469178   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.469185   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:44.469191   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:44.469248   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:44.494058   39074 cri.go:89] found id: ""
	I1002 20:21:44.494070   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.494076   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:44.494083   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:44.494091   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:44.563166   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:44.563185   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:44.574587   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:44.574601   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:44.627643   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:44.620697   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.621137   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.622679   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.623151   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.624644   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:44.620697   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.621137   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.622679   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.623151   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.624644   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:44.627670   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:44.627681   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:44.688606   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:44.688623   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:47.218714   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:47.229181   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:47.229224   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:47.254586   39074 cri.go:89] found id: ""
	I1002 20:21:47.254600   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.254607   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:47.254611   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:47.254666   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:47.277466   39074 cri.go:89] found id: ""
	I1002 20:21:47.277479   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.277485   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:47.277489   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:47.277529   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:47.300741   39074 cri.go:89] found id: ""
	I1002 20:21:47.300754   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.300759   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:47.300764   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:47.300819   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:47.325015   39074 cri.go:89] found id: ""
	I1002 20:21:47.325030   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.325037   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:47.325042   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:47.325086   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:47.349241   39074 cri.go:89] found id: ""
	I1002 20:21:47.349256   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.349264   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:47.349270   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:47.349322   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:47.373778   39074 cri.go:89] found id: ""
	I1002 20:21:47.373790   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.373796   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:47.373801   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:47.373847   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:47.397514   39074 cri.go:89] found id: ""
	I1002 20:21:47.397527   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.397532   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:47.397539   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:47.397550   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:47.452728   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:47.446108   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.446609   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.448123   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.448540   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.450035   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:47.446108   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.446609   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.448123   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.448540   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.450035   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:47.452738   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:47.452748   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:47.513401   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:47.513419   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:47.542325   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:47.542339   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:47.607380   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:47.607397   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:50.119560   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:50.129969   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:50.130031   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:50.154300   39074 cri.go:89] found id: ""
	I1002 20:21:50.154314   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.154322   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:50.154329   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:50.154381   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:50.178814   39074 cri.go:89] found id: ""
	I1002 20:21:50.178831   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.178840   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:50.178846   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:50.178886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:50.202532   39074 cri.go:89] found id: ""
	I1002 20:21:50.202546   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.202553   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:50.202558   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:50.202597   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:50.227602   39074 cri.go:89] found id: ""
	I1002 20:21:50.227620   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.227630   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:50.227636   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:50.227705   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:50.254467   39074 cri.go:89] found id: ""
	I1002 20:21:50.254479   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.254485   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:50.254490   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:50.254534   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:50.279114   39074 cri.go:89] found id: ""
	I1002 20:21:50.279132   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.279141   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:50.279147   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:50.279196   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:50.303673   39074 cri.go:89] found id: ""
	I1002 20:21:50.303689   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.303695   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:50.303703   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:50.303712   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:50.367227   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:50.367244   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:50.394498   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:50.394517   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:50.463556   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:50.463573   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:50.475248   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:50.475266   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:50.530138   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:50.523630   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.524260   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.525840   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.526247   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.527437   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:50.523630   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.524260   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.525840   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.526247   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.527437   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
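To reproduce these checks by hand inside the node (for example via `minikube ssh`), the commands below are the ones the log runs each cycle; the paths and the v1.34.1 kubectl binary location are taken from the log itself, not verified elsewhere. A minimal sketch, assuming the same single-node crio setup:

# Probes run by each cycle of the wait loop above.
sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # is an apiserver process running?
sudo crictl ps -a --quiet --name=kube-apiserver  # any apiserver container, even exited?
sudo journalctl -u kubelet -n 400                # kubelet log tail
sudo journalctl -u crio -n 400                   # CRI-O log tail
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

In this run, every probe comes back empty and the kubectl call is refused on localhost:8441, which is why the loop never exits.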
	I1002 20:21:53.031819   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:53.042276   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:53.042319   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:53.066835   39074 cri.go:89] found id: ""
	I1002 20:21:53.066850   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.066865   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:53.066872   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:53.066914   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:53.090995   39074 cri.go:89] found id: ""
	I1002 20:21:53.091008   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.091014   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:53.091018   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:53.091057   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:53.116027   39074 cri.go:89] found id: ""
	I1002 20:21:53.116043   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.116051   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:53.116056   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:53.116097   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:53.141627   39074 cri.go:89] found id: ""
	I1002 20:21:53.141640   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.141661   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:53.141668   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:53.141710   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:53.167140   39074 cri.go:89] found id: ""
	I1002 20:21:53.167157   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.167163   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:53.167167   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:53.167210   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:53.190437   39074 cri.go:89] found id: ""
	I1002 20:21:53.190453   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.190459   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:53.190464   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:53.190506   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:53.214513   39074 cri.go:89] found id: ""
	I1002 20:21:53.214527   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.214534   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:53.214541   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:53.214550   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:53.282233   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:53.282249   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:53.293348   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:53.293361   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:53.347988   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:53.341334   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.341823   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.343307   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.343741   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.345249   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:53.341334   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.341823   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.343307   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.343741   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.345249   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:53.347998   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:53.348008   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:53.407000   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:53.407019   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:55.936592   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:55.946748   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:55.946803   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:55.971330   39074 cri.go:89] found id: ""
	I1002 20:21:55.971347   39074 logs.go:282] 0 containers: []
	W1002 20:21:55.971353   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:55.971358   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:55.971398   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:55.995571   39074 cri.go:89] found id: ""
	I1002 20:21:55.995585   39074 logs.go:282] 0 containers: []
	W1002 20:21:55.995591   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:55.995595   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:55.995635   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:56.020541   39074 cri.go:89] found id: ""
	I1002 20:21:56.020563   39074 logs.go:282] 0 containers: []
	W1002 20:21:56.020573   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:56.020578   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:56.020620   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:56.045458   39074 cri.go:89] found id: ""
	I1002 20:21:56.045474   39074 logs.go:282] 0 containers: []
	W1002 20:21:56.045480   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:56.045485   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:56.045524   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:56.069082   39074 cri.go:89] found id: ""
	I1002 20:21:56.069094   39074 logs.go:282] 0 containers: []
	W1002 20:21:56.069101   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:56.069105   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:56.069150   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:56.094402   39074 cri.go:89] found id: ""
	I1002 20:21:56.094417   39074 logs.go:282] 0 containers: []
	W1002 20:21:56.094425   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:56.094430   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:56.094471   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:56.118733   39074 cri.go:89] found id: ""
	I1002 20:21:56.118748   39074 logs.go:282] 0 containers: []
	W1002 20:21:56.118755   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:56.118764   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:56.118776   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:56.186773   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:56.186792   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:56.198306   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:56.198321   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:56.253135   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:56.246592   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.247035   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.248560   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.249003   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.250528   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:56.246592   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.247035   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.248560   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.249003   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.250528   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:56.253144   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:56.253156   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:56.313368   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:56.313384   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:58.841758   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:58.852748   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:58.852795   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:58.878085   39074 cri.go:89] found id: ""
	I1002 20:21:58.878101   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.878109   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:58.878115   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:58.878169   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:58.903034   39074 cri.go:89] found id: ""
	I1002 20:21:58.903047   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.903054   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:58.903058   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:58.903097   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:58.928063   39074 cri.go:89] found id: ""
	I1002 20:21:58.928079   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.928085   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:58.928090   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:58.928132   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:58.953963   39074 cri.go:89] found id: ""
	I1002 20:21:58.953976   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.953982   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:58.953987   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:58.954039   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:58.980346   39074 cri.go:89] found id: ""
	I1002 20:21:58.980363   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.980372   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:58.980379   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:58.980430   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:59.006332   39074 cri.go:89] found id: ""
	I1002 20:21:59.006348   39074 logs.go:282] 0 containers: []
	W1002 20:21:59.006357   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:59.006364   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:59.006422   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:59.030980   39074 cri.go:89] found id: ""
	I1002 20:21:59.030995   39074 logs.go:282] 0 containers: []
	W1002 20:21:59.031004   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:59.031013   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:59.031026   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:59.086481   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:59.079666   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.080350   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.081980   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.082417   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.083935   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:59.086489   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:59.086498   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:59.150520   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:59.150539   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:59.178745   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:59.178759   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:59.248128   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:59.248146   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:01.761244   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:01.771733   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:01.771783   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:01.796879   39074 cri.go:89] found id: ""
	I1002 20:22:01.796894   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.796903   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:01.796908   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:01.796951   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:01.822376   39074 cri.go:89] found id: ""
	I1002 20:22:01.822389   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.822395   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:01.822400   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:01.822445   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:01.847608   39074 cri.go:89] found id: ""
	I1002 20:22:01.847622   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.847628   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:01.847633   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:01.847701   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:01.872893   39074 cri.go:89] found id: ""
	I1002 20:22:01.872913   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.872919   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:01.872924   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:01.872995   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:01.899179   39074 cri.go:89] found id: ""
	I1002 20:22:01.899197   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.899205   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:01.899210   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:01.899258   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:01.925133   39074 cri.go:89] found id: ""
	I1002 20:22:01.925149   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.925158   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:01.925165   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:01.925209   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:01.951281   39074 cri.go:89] found id: ""
	I1002 20:22:01.951294   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.951300   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:01.951307   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:01.951316   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:02.008670   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:02.001480   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.002372   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.004005   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.004404   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.005795   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:22:02.008684   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:02.008697   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:02.072947   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:02.072969   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:02.102011   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:02.102027   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:02.168431   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:02.168449   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:04.680455   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:04.690926   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:04.690981   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:04.715368   39074 cri.go:89] found id: ""
	I1002 20:22:04.715384   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.715390   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:04.715394   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:04.715438   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:04.739937   39074 cri.go:89] found id: ""
	I1002 20:22:04.739951   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.739956   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:04.739960   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:04.739998   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:04.763534   39074 cri.go:89] found id: ""
	I1002 20:22:04.763546   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.763552   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:04.763556   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:04.763615   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:04.788497   39074 cri.go:89] found id: ""
	I1002 20:22:04.788512   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.788519   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:04.788523   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:04.788571   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:04.813000   39074 cri.go:89] found id: ""
	I1002 20:22:04.813012   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.813018   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:04.813022   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:04.813061   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:04.837324   39074 cri.go:89] found id: ""
	I1002 20:22:04.837336   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.837342   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:04.837347   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:04.837387   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:04.863392   39074 cri.go:89] found id: ""
	I1002 20:22:04.863404   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.863410   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:04.863416   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:04.863425   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:04.917001   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:04.910495   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.911044   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.912561   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.912981   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.914415   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:22:04.917008   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:04.917017   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:04.980350   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:04.980366   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:05.007566   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:05.007580   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:05.076403   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:05.076419   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:07.589145   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:07.599347   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:07.599390   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:07.623799   39074 cri.go:89] found id: ""
	I1002 20:22:07.623812   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.623818   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:07.623823   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:07.623862   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:07.648210   39074 cri.go:89] found id: ""
	I1002 20:22:07.648222   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.648229   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:07.648233   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:07.648279   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:07.672861   39074 cri.go:89] found id: ""
	I1002 20:22:07.672874   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.672880   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:07.672885   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:07.672933   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:07.696504   39074 cri.go:89] found id: ""
	I1002 20:22:07.696521   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.696530   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:07.696535   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:07.696577   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:07.722324   39074 cri.go:89] found id: ""
	I1002 20:22:07.722340   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.722346   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:07.722351   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:07.722391   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:07.748388   39074 cri.go:89] found id: ""
	I1002 20:22:07.748402   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.748408   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:07.748412   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:07.748449   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:07.773539   39074 cri.go:89] found id: ""
	I1002 20:22:07.773557   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.773564   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:07.773570   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:07.773579   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:07.843853   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:07.843875   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:07.855493   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:07.855511   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:07.909935   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:07.903152   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.903746   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.905310   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.905743   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.907276   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:22:07.909945   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:07.909955   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:07.971055   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:07.971072   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:10.498842   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:10.509052   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:10.509100   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:10.532641   39074 cri.go:89] found id: ""
	I1002 20:22:10.532673   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.532683   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:10.532689   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:10.532737   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:10.555850   39074 cri.go:89] found id: ""
	I1002 20:22:10.555865   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.555872   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:10.555877   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:10.555943   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:10.579608   39074 cri.go:89] found id: ""
	I1002 20:22:10.579623   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.579631   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:10.579636   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:10.579701   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:10.603930   39074 cri.go:89] found id: ""
	I1002 20:22:10.603945   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.603954   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:10.603960   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:10.604006   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:10.627050   39074 cri.go:89] found id: ""
	I1002 20:22:10.627063   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.627070   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:10.627074   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:10.627115   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:10.650231   39074 cri.go:89] found id: ""
	I1002 20:22:10.650246   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.650254   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:10.650261   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:10.650309   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:10.674381   39074 cri.go:89] found id: ""
	I1002 20:22:10.674396   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.674404   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:10.674413   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:10.674422   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:10.743365   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:10.743388   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:10.754432   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:10.754446   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:10.809037   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:10.802468   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.802995   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.804524   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.804992   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.806544   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:22:10.809051   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:10.809061   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:10.866627   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:10.866642   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:13.395270   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:13.405561   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:13.405603   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:13.429063   39074 cri.go:89] found id: ""
	I1002 20:22:13.429076   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.429081   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:13.429086   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:13.429125   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:13.452589   39074 cri.go:89] found id: ""
	I1002 20:22:13.452604   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.452609   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:13.452613   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:13.452669   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:13.476844   39074 cri.go:89] found id: ""
	I1002 20:22:13.476856   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.476862   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:13.476866   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:13.476905   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:13.501936   39074 cri.go:89] found id: ""
	I1002 20:22:13.501948   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.501955   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:13.501960   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:13.502000   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:13.526895   39074 cri.go:89] found id: ""
	I1002 20:22:13.526907   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.526913   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:13.526917   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:13.526968   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:13.550888   39074 cri.go:89] found id: ""
	I1002 20:22:13.550902   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.550910   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:13.550914   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:13.550960   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:13.573769   39074 cri.go:89] found id: ""
	I1002 20:22:13.573784   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.573790   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:13.573796   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:13.573807   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:13.626468   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:13.620002   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.620523   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.622171   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.622562   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.623979   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:22:13.626477   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:13.626485   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:13.685732   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:13.685747   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:13.713954   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:13.713970   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:13.785525   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:13.785541   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:16.298756   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:16.309103   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:16.309143   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:16.335506   39074 cri.go:89] found id: ""
	I1002 20:22:16.335521   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.335529   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:16.335535   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:16.335586   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:16.359417   39074 cri.go:89] found id: ""
	I1002 20:22:16.359431   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.359437   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:16.359442   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:16.359482   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:16.383496   39074 cri.go:89] found id: ""
	I1002 20:22:16.383509   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.383517   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:16.383523   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:16.383578   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:16.409227   39074 cri.go:89] found id: ""
	I1002 20:22:16.409243   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.409250   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:16.409254   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:16.409294   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:16.433847   39074 cri.go:89] found id: ""
	I1002 20:22:16.433861   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.433870   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:16.433876   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:16.433933   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:16.457278   39074 cri.go:89] found id: ""
	I1002 20:22:16.457293   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.457299   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:16.457306   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:16.457345   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:16.482697   39074 cri.go:89] found id: ""
	I1002 20:22:16.482709   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.482715   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:16.482721   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:16.482730   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:16.548732   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:16.548752   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:16.559732   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:16.559747   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:16.612487   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:16.606170   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.606702   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.608183   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.608579   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.610023   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:22:16.612499   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:16.612511   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:16.671684   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:16.671702   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:19.200094   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:19.210479   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:19.210527   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:19.235486   39074 cri.go:89] found id: ""
	I1002 20:22:19.235501   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.235510   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:19.235515   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:19.235560   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:19.259294   39074 cri.go:89] found id: ""
	I1002 20:22:19.259305   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.259312   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:19.259316   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:19.259353   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:19.283859   39074 cri.go:89] found id: ""
	I1002 20:22:19.283875   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.283884   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:19.283889   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:19.283941   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:19.307454   39074 cri.go:89] found id: ""
	I1002 20:22:19.307468   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.307473   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:19.307477   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:19.307519   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:19.332321   39074 cri.go:89] found id: ""
	I1002 20:22:19.332334   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.332340   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:19.332345   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:19.332384   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:19.356798   39074 cri.go:89] found id: ""
	I1002 20:22:19.356818   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.356826   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:19.356832   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:19.356886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:19.382609   39074 cri.go:89] found id: ""
	I1002 20:22:19.382624   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.382632   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:19.382641   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:19.382662   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:19.409876   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:19.409890   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:19.476525   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:19.476540   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:19.487600   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:19.487616   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:19.540532   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:19.533762   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.534339   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.535957   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.536415   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.537924   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:22:19.540541   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:19.540552   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:22.106355   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:22.116499   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:22.116552   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:22.142485   39074 cri.go:89] found id: ""
	I1002 20:22:22.142499   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.142507   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:22.142514   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:22.142561   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:22.168287   39074 cri.go:89] found id: ""
	I1002 20:22:22.168301   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.168308   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:22.168312   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:22.168352   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:22.192639   39074 cri.go:89] found id: ""
	I1002 20:22:22.192666   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.192674   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:22.192680   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:22.192726   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:22.217360   39074 cri.go:89] found id: ""
	I1002 20:22:22.217375   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.217383   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:22.217390   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:22.217436   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:22.241729   39074 cri.go:89] found id: ""
	I1002 20:22:22.241744   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.241753   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:22.241759   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:22.241809   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:22.266793   39074 cri.go:89] found id: ""
	I1002 20:22:22.266810   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.266817   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:22.266822   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:22.266866   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:22.289775   39074 cri.go:89] found id: ""
	I1002 20:22:22.289789   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.289794   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:22.289801   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:22.289809   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:22.344340   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:22.337274   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.337797   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.339350   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.339784   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.341397   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:22:22.344350   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:22.344362   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:22.404393   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:22.404410   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:22.432171   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:22.432186   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:22.498216   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:22.498233   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
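Each polling pass above follows the same shape: probe for a live kube-apiserver process, list CRI containers for every control-plane component by name, then gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal sketch of the equivalent manual checks, run from a shell on the node (unit names and the bundled kubectl path are taken from the log itself; this is not minikube's implementation):

    # Probe for a running apiserver process; exits non-zero when none is found
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # List all containers (any state) matching a component name; prints IDs only
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Tail the journals the log-gathering step collects
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # Describe nodes through the bundled kubectl; in this run it fails with "connection refused"
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

Every pass in this run returns empty container lists and a refused connection on localhost:8441, which is why the same block repeats below every few seconds until the restart timeout.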
	I1002 20:22:25.010156   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:25.020516   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:25.020560   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:25.045455   39074 cri.go:89] found id: ""
	I1002 20:22:25.045470   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.045480   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:25.045486   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:25.045529   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:25.070018   39074 cri.go:89] found id: ""
	I1002 20:22:25.070031   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.070037   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:25.070041   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:25.070080   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:25.093191   39074 cri.go:89] found id: ""
	I1002 20:22:25.093204   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.093210   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:25.093214   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:25.093257   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:25.117770   39074 cri.go:89] found id: ""
	I1002 20:22:25.117782   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.117788   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:25.117793   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:25.117834   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:25.141300   39074 cri.go:89] found id: ""
	I1002 20:22:25.141315   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.141325   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:25.141331   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:25.141383   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:25.165980   39074 cri.go:89] found id: ""
	I1002 20:22:25.165993   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.165999   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:25.166003   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:25.166041   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:25.191730   39074 cri.go:89] found id: ""
	I1002 20:22:25.191742   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.191749   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:25.191757   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:25.191766   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:25.259005   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:25.259025   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:25.270639   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:25.270673   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:25.324592   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:25.317379   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.317967   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.319572   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.320064   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.321617   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:25.317379   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.317967   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.319572   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.320064   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.321617   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:25.324602   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:25.324614   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:25.385501   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:25.385519   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:27.914463   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:27.925227   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:27.925271   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:27.948666   39074 cri.go:89] found id: ""
	I1002 20:22:27.948681   39074 logs.go:282] 0 containers: []
	W1002 20:22:27.948690   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:27.948695   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:27.948735   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:27.972698   39074 cri.go:89] found id: ""
	I1002 20:22:27.972711   39074 logs.go:282] 0 containers: []
	W1002 20:22:27.972716   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:27.972720   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:27.972765   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:27.996954   39074 cri.go:89] found id: ""
	I1002 20:22:27.996970   39074 logs.go:282] 0 containers: []
	W1002 20:22:27.996979   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:27.996984   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:27.997029   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:28.022092   39074 cri.go:89] found id: ""
	I1002 20:22:28.022109   39074 logs.go:282] 0 containers: []
	W1002 20:22:28.022117   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:28.022123   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:28.022164   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:28.047808   39074 cri.go:89] found id: ""
	I1002 20:22:28.047824   39074 logs.go:282] 0 containers: []
	W1002 20:22:28.047831   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:28.047836   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:28.047876   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:28.071793   39074 cri.go:89] found id: ""
	I1002 20:22:28.071807   39074 logs.go:282] 0 containers: []
	W1002 20:22:28.071816   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:28.071822   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:28.071868   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:28.096447   39074 cri.go:89] found id: ""
	I1002 20:22:28.096462   39074 logs.go:282] 0 containers: []
	W1002 20:22:28.096471   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:28.096479   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:28.096489   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:28.107018   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:28.107032   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:28.159925   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:28.153221   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.153766   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.155300   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.155764   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.157273   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:28.153221   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.153766   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.155300   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.155764   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.157273   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:28.159935   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:28.159945   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:28.219759   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:28.219776   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:28.247325   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:28.247345   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:30.813772   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:30.824079   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:30.824122   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:30.847714   39074 cri.go:89] found id: ""
	I1002 20:22:30.847727   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.847734   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:30.847739   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:30.847783   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:30.870579   39074 cri.go:89] found id: ""
	I1002 20:22:30.870612   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.870619   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:30.870623   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:30.870686   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:30.894513   39074 cri.go:89] found id: ""
	I1002 20:22:30.894528   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.894537   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:30.894542   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:30.894591   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:30.919171   39074 cri.go:89] found id: ""
	I1002 20:22:30.919186   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.919191   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:30.919196   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:30.919236   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:30.943990   39074 cri.go:89] found id: ""
	I1002 20:22:30.944003   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.944009   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:30.944013   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:30.944054   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:30.968147   39074 cri.go:89] found id: ""
	I1002 20:22:30.968162   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.968170   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:30.968178   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:30.968227   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:30.991705   39074 cri.go:89] found id: ""
	I1002 20:22:30.991717   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.991722   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:30.991729   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:30.991740   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:31.046303   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:31.038433   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.039034   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.040929   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.041723   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.043230   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:31.038433   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.039034   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.040929   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.041723   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.043230   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:31.046314   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:31.046325   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:31.105380   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:31.105397   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:31.132347   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:31.132363   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:31.202102   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:31.202119   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:33.715172   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:33.725339   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:33.725386   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:33.750520   39074 cri.go:89] found id: ""
	I1002 20:22:33.750534   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.750543   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:33.750549   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:33.750595   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:33.773913   39074 cri.go:89] found id: ""
	I1002 20:22:33.773928   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.773937   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:33.773943   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:33.773991   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:33.797530   39074 cri.go:89] found id: ""
	I1002 20:22:33.797545   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.797554   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:33.797560   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:33.797630   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:33.821852   39074 cri.go:89] found id: ""
	I1002 20:22:33.821871   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.821879   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:33.821885   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:33.821934   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:33.846332   39074 cri.go:89] found id: ""
	I1002 20:22:33.846348   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.846356   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:33.846362   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:33.846400   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:33.870615   39074 cri.go:89] found id: ""
	I1002 20:22:33.870629   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.870639   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:33.870657   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:33.870706   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:33.895226   39074 cri.go:89] found id: ""
	I1002 20:22:33.895241   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.895250   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:33.895266   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:33.895276   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:33.955530   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:33.955547   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:33.983183   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:33.983198   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:34.049224   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:34.049251   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:34.060667   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:34.060686   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:34.114666   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:34.107838   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.108343   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.109897   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.110325   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.111840   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:34.107838   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.108343   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.109897   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.110325   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.111840   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:36.616388   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:36.626616   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:36.626688   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:36.652926   39074 cri.go:89] found id: ""
	I1002 20:22:36.652947   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.652957   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:36.652965   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:36.653011   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:36.676048   39074 cri.go:89] found id: ""
	I1002 20:22:36.676060   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.676066   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:36.676071   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:36.676115   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:36.700475   39074 cri.go:89] found id: ""
	I1002 20:22:36.700489   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.700499   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:36.700505   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:36.700546   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:36.724541   39074 cri.go:89] found id: ""
	I1002 20:22:36.724559   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.724567   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:36.724576   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:36.724623   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:36.748967   39074 cri.go:89] found id: ""
	I1002 20:22:36.748982   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.748991   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:36.748997   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:36.749043   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:36.773168   39074 cri.go:89] found id: ""
	I1002 20:22:36.773183   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.773191   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:36.773197   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:36.773249   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:36.796981   39074 cri.go:89] found id: ""
	I1002 20:22:36.796997   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.797003   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:36.797011   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:36.797023   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:36.867000   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:36.867018   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:36.878017   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:36.878031   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:36.931114   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:36.924389   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.924871   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.926348   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.926766   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.928319   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:36.924389   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.924871   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.926348   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.926766   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.928319   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:36.931129   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:36.931137   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:36.993849   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:36.993868   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:39.524626   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:39.535502   39074 kubeadm.go:601] duration metric: took 4m1.714069333s to restartPrimaryControlPlane
	W1002 20:22:39.535572   39074 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
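The restart path gives up here: after 4m1.7s of polling, no kube-apiserver container ever appeared, so minikube falls back to wiping the control plane with kubeadm reset and re-initializing. The check it kept failing can be reproduced by hand against the port shown in the log (a hypothetical manual probe, not part of the tooling):

    # From inside the node; the log shows this port refusing connections
    curl -k https://localhost:8441/livez
    # A healthy apiserver answers "ok"; here the TCP connect itself is refused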
	I1002 20:22:39.535638   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:22:39.981011   39074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:22:39.993244   39074 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:22:40.001158   39074 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:22:40.001211   39074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:22:40.008736   39074 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:22:40.008749   39074 kubeadm.go:157] found existing configuration files:
	
	I1002 20:22:40.008782   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:22:40.015964   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:22:40.016000   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:22:40.022839   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:22:40.030026   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:22:40.030064   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:22:40.036752   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:22:40.043720   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:22:40.043755   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:22:40.050532   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:22:40.057416   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:22:40.057453   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
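The cleanup above is a grep-or-remove pass: each kubeconfig under /etc/kubernetes is kept only if it references the expected control-plane endpoint, and since none of the four files exists, every grep exits with status 2 and the rm is a no-op. A condensed sketch of the same logic (endpoint copied from the log):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"   # drop stale or missing config
    done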
	I1002 20:22:40.063936   39074 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:22:40.116427   39074 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:22:40.171173   39074 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:26:42.624936   39074 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:26:42.625021   39074 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:26:42.627908   39074 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:26:42.627954   39074 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:26:42.628043   39074 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:26:42.628106   39074 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:26:42.628137   39074 kubeadm.go:318] OS: Linux
	I1002 20:26:42.628173   39074 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:26:42.628211   39074 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:26:42.628278   39074 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:26:42.628331   39074 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:26:42.628370   39074 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:26:42.628412   39074 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:26:42.628451   39074 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:26:42.628487   39074 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:26:42.628556   39074 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:26:42.628674   39074 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:26:42.628787   39074 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:26:42.628860   39074 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:26:42.630666   39074 out.go:252]   - Generating certificates and keys ...
	I1002 20:26:42.630736   39074 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:26:42.630813   39074 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:26:42.630900   39074 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:26:42.630973   39074 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:26:42.631035   39074 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:26:42.631078   39074 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:26:42.631142   39074 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:26:42.631194   39074 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:26:42.631256   39074 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:26:42.631324   39074 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:26:42.631354   39074 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:26:42.631399   39074 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:26:42.631441   39074 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:26:42.631487   39074 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:26:42.631529   39074 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:26:42.631595   39074 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:26:42.631671   39074 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:26:42.631741   39074 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:26:42.631812   39074 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:26:42.633616   39074 out.go:252]   - Booting up control plane ...
	I1002 20:26:42.633716   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:26:42.633796   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:26:42.633850   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:26:42.633948   39074 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:26:42.634026   39074 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:26:42.634114   39074 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:26:42.634190   39074 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:26:42.634222   39074 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:26:42.634348   39074 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:26:42.634448   39074 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:26:42.634515   39074 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000852315s
	I1002 20:26:42.634627   39074 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:26:42.634725   39074 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 20:26:42.634809   39074 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:26:42.634907   39074 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:26:42.635026   39074 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000487211s
	I1002 20:26:42.635115   39074 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000480662s
	I1002 20:26:42.635180   39074 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000745923s
	I1002 20:26:42.635185   39074 kubeadm.go:318] 
	I1002 20:26:42.635259   39074 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:26:42.635324   39074 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:26:42.635395   39074 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:26:42.635478   39074 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:26:42.635541   39074 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:26:42.635608   39074 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:26:42.635644   39074 kubeadm.go:318] 
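kubeadm's wait-control-plane phase, replayed above, polls three local endpoints for up to 4m0s each, and all three time out in this run. The endpoints are spelled out in the log and can be queried directly (manual checks, assuming the addresses from this run):

    # kube-apiserver liveness, on the advertise address from the log
    curl -k https://192.168.49.2:8441/livez
    # kube-controller-manager and kube-scheduler health, on localhost
    curl -k https://127.0.0.1:10257/healthz
    curl -k https://127.0.0.1:10259/livez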
	W1002 20:26:42.635735   39074 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000852315s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000487211s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000480662s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000745923s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
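Before the retry that follows, the next diagnostic step kubeadm itself suggests is to list the Kubernetes containers through the CRI socket and read the failing one's logs; the exact commands are quoted in the error text above:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then, with an ID from that listing (CONTAINERID is a placeholder):
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID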
	
	I1002 20:26:42.635812   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:26:43.072992   39074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:26:43.084946   39074 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:26:43.084987   39074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:26:43.092545   39074 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:26:43.092552   39074 kubeadm.go:157] found existing configuration files:
	
	I1002 20:26:43.092583   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:26:43.099679   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:26:43.099725   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:26:43.106411   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:26:43.113271   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:26:43.113302   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:26:43.120089   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:26:43.126923   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:26:43.126953   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:26:43.133686   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:26:43.140427   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:26:43.140454   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
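The stale-config pass above boils down to: keep each kubeconfig only if it already points at the expected control-plane URL, otherwise remove it. A hedged shell equivalent of what the log records (illustrative only; minikube performs this through its own ssh runner):

    # Sketch of the check-then-remove loop seen above.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done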
	I1002 20:26:43.147131   39074 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:26:43.180956   39074 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:26:43.181017   39074 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:26:43.199951   39074 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:26:43.200009   39074 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:26:43.200037   39074 kubeadm.go:318] OS: Linux
	I1002 20:26:43.200076   39074 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:26:43.200114   39074 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:26:43.200153   39074 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:26:43.200196   39074 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:26:43.200234   39074 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:26:43.200272   39074 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:26:43.200315   39074 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:26:43.200350   39074 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:26:43.254197   39074 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:26:43.254330   39074 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:26:43.254435   39074 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:26:43.260331   39074 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:26:43.264543   39074 out.go:252]   - Generating certificates and keys ...
	I1002 20:26:43.264610   39074 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:26:43.264706   39074 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:26:43.264789   39074 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:26:43.264843   39074 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:26:43.264905   39074 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:26:43.264949   39074 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:26:43.265012   39074 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:26:43.265062   39074 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:26:43.265129   39074 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:26:43.265188   39074 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:26:43.265219   39074 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:26:43.265265   39074 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:26:43.505091   39074 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:26:43.932140   39074 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:26:44.064643   39074 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:26:44.173218   39074 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:26:44.534380   39074 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:26:44.534804   39074 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:26:44.538135   39074 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:26:44.539757   39074 out.go:252]   - Booting up control plane ...
	I1002 20:26:44.539881   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:26:44.539950   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:26:44.540002   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:26:44.553179   39074 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:26:44.553329   39074 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:26:44.559491   39074 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:26:44.559770   39074 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:26:44.559808   39074 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:26:44.659881   39074 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:26:44.660026   39074 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:26:45.660495   39074 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000782032s
	I1002 20:26:45.664397   39074 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:26:45.664522   39074 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 20:26:45.664595   39074 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:26:45.664676   39074 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:30:45.665391   39074 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000736768s
	I1002 20:30:45.665506   39074 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000901114s
	I1002 20:30:45.665618   39074 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001071178s
	I1002 20:30:45.665634   39074 kubeadm.go:318] 
	I1002 20:30:45.665788   39074 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:30:45.665904   39074 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:30:45.665995   39074 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:30:45.666081   39074 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:30:45.666142   39074 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:30:45.666213   39074 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:30:45.666216   39074 kubeadm.go:318] 
	I1002 20:30:45.669103   39074 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:30:45.669219   39074 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:30:45.669740   39074 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:30:45.669792   39074 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
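The hint above refers to the kubeadm invocation itself: appending --v=5 to the same init command makes it print the stack trace. A hedged sketch (same config file as in the log; the PATH prefix pointing at the bundled binaries is left out for brevity):

    # Re-run the failing init verbosely to get the stack trace (sketch).
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --v=5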
	I1002 20:30:45.669843   39074 kubeadm.go:402] duration metric: took 12m7.882478982s to StartCluster
	I1002 20:30:45.669874   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:30:45.669917   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:30:45.695577   39074 cri.go:89] found id: ""
	I1002 20:30:45.695596   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.695603   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:30:45.695610   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:30:45.695674   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:30:45.719440   39074 cri.go:89] found id: ""
	I1002 20:30:45.719456   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.719464   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:30:45.719469   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:30:45.719511   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:30:45.743166   39074 cri.go:89] found id: ""
	I1002 20:30:45.743181   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.743190   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:30:45.743195   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:30:45.743238   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:30:45.767934   39074 cri.go:89] found id: ""
	I1002 20:30:45.767959   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.767967   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:30:45.767974   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:30:45.768019   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:30:45.792091   39074 cri.go:89] found id: ""
	I1002 20:30:45.792102   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.792108   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:30:45.792112   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:30:45.792150   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:30:45.815448   39074 cri.go:89] found id: ""
	I1002 20:30:45.815463   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.815469   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:30:45.815475   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:30:45.815518   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:30:45.840287   39074 cri.go:89] found id: ""
	I1002 20:30:45.840299   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.840305   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:30:45.840312   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:30:45.840321   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:30:45.868158   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:30:45.868172   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:30:45.936734   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:30:45.936752   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:30:45.948158   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:30:45.948175   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:30:46.002360   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:30:45.995517   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.996138   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.997668   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.998069   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.999573   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:30:45.995517   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.996138   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.997668   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.998069   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.999573   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:30:46.002381   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:30:46.002392   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1002 20:30:46.065214   39074 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000782032s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000736768s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000901114s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001071178s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:30:46.065257   39074 out.go:285] * 
	W1002 20:30:46.065383   39074 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout/stderr: verbatim repeat of the "Error starting cluster" kubeadm init output above.
	
	W1002 20:30:46.065406   39074 out.go:285] * 
	W1002 20:30:46.067075   39074 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
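The log bundle the box asks for can be produced against this profile with the documented command (-p only selects the profile from this run):

    minikube logs --file=logs.txt -p functional-753218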
	I1002 20:30:46.070473   39074 out.go:203] 
	W1002 20:30:46.071639   39074 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout/stderr: verbatim repeat of the "Error starting cluster" kubeadm init output above.
	
	W1002 20:30:46.071666   39074 out.go:285] * 
	I1002 20:30:46.072909   39074 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.578716314Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8edcacbd-1d81-4dc6-8f4f-0fd70bfdb6c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.579524457Z" level=info msg="createCtr: deleting container ID 011458c3484a34a4761c138ce28bea0b5d171a4a446a98a8b6ccbe16d0a221cc from idIndex" id=2ef148b8-b8dc-44a5-ac5b-5c4d911f40c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.579554899Z" level=info msg="createCtr: removing container 011458c3484a34a4761c138ce28bea0b5d171a4a446a98a8b6ccbe16d0a221cc" id=2ef148b8-b8dc-44a5-ac5b-5c4d911f40c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.579581652Z" level=info msg="createCtr: deleting container 011458c3484a34a4761c138ce28bea0b5d171a4a446a98a8b6ccbe16d0a221cc from storage" id=2ef148b8-b8dc-44a5-ac5b-5c4d911f40c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.57987229Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=4477bd8b-de09-41de-aae8-add16bf7fb2d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.580027245Z" level=info msg="createCtr: deleting container ID 6d4c64b92b255a273f9b5f60b5c744e62abda7ace9eb8d6b1381ab5d42947186 from idIndex" id=8edcacbd-1d81-4dc6-8f4f-0fd70bfdb6c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.580054529Z" level=info msg="createCtr: removing container 6d4c64b92b255a273f9b5f60b5c744e62abda7ace9eb8d6b1381ab5d42947186" id=8edcacbd-1d81-4dc6-8f4f-0fd70bfdb6c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.580088794Z" level=info msg="createCtr: deleting container 6d4c64b92b255a273f9b5f60b5c744e62abda7ace9eb8d6b1381ab5d42947186 from storage" id=8edcacbd-1d81-4dc6-8f4f-0fd70bfdb6c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.581202962Z" level=info msg="createCtr: deleting container ID 9f3e30af1a945c60f4428061f6cbb4af46ff7b7aa3f4cc4da6d6c8ff909669ac from idIndex" id=4477bd8b-de09-41de-aae8-add16bf7fb2d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.581236233Z" level=info msg="createCtr: removing container 9f3e30af1a945c60f4428061f6cbb4af46ff7b7aa3f4cc4da6d6c8ff909669ac" id=4477bd8b-de09-41de-aae8-add16bf7fb2d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.581268315Z" level=info msg="createCtr: deleting container 9f3e30af1a945c60f4428061f6cbb4af46ff7b7aa3f4cc4da6d6c8ff909669ac from storage" id=4477bd8b-de09-41de-aae8-add16bf7fb2d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.582774964Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-753218_kube-system_b932b0024653c86a7ea85a2a83a943a4_0" id=2ef148b8-b8dc-44a5-ac5b-5c4d911f40c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.584231912Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-753218_kube-system_91f2b96cb3e3f380ded17f30e8d873bd_0" id=8edcacbd-1d81-4dc6-8f4f-0fd70bfdb6c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:39 functional-753218 crio[5814]: time="2025-10-02T20:30:39.584517216Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-753218_kube-system_b25a71e49a335bbe853872de1b1e3093_0" id=4477bd8b-de09-41de-aae8-add16bf7fb2d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.546180204Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=9f48774c-99d7-4c53-9acd-56238a58b621 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.547057638Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c4827f6f-404c-4181-82ca-154f80cbb907 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.547869288Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-753218/kube-apiserver" id=c5dff043-11a1-4b1d-b981-42c744daa8e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.548067236Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.551126227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.551499586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.565235643Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c5dff043-11a1-4b1d-b981-42c744daa8e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.566508741Z" level=info msg="createCtr: deleting container ID 21140f6a02459c6df36ecf818ef775e79b3462280bfd0d1f16f28e3316920953 from idIndex" id=c5dff043-11a1-4b1d-b981-42c744daa8e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.566538929Z" level=info msg="createCtr: removing container 21140f6a02459c6df36ecf818ef775e79b3462280bfd0d1f16f28e3316920953" id=c5dff043-11a1-4b1d-b981-42c744daa8e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.566565962Z" level=info msg="createCtr: deleting container 21140f6a02459c6df36ecf818ef775e79b3462280bfd0d1f16f28e3316920953 from storage" id=c5dff043-11a1-4b1d-b981-42c744daa8e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.568315977Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-753218_kube-system_802f0aebed1bb3dd62306b1d2076fd94_0" id=c5dff043-11a1-4b1d-b981-42c744daa8e8 name=/runtime.v1.RuntimeService/CreateContainer
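Every container creation in the excerpt above dies with the same runtime error, "cannot open sd-bus: No such file or directory". That message typically indicates the runtime is configured for the systemd cgroup manager but cannot reach systemd's D-Bus socket; a hedged way to check both sides, assuming the same profile:

    # Is CRI-O configured for the systemd cgroup manager? (sketch)
    minikube ssh -p functional-753218 -- "sudo crio config 2>/dev/null | grep -i cgroup_manager"
    # Is a systemd/D-Bus socket actually present inside the node?
    minikube ssh -p functional-753218 -- "ls -l /run/systemd/private /run/dbus/system_bus_socket"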
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:30:48.869696   15842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:48.870231   15842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:48.871727   15842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:48.872145   15842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:48.873605   15842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:30:48 up  1:13,  0 user,  load average: 0.08, 0.06, 0.07
	Linux functional-753218 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:30:39 functional-753218 kubelet[14925]:  > podSandboxID="938004d98ea751eb2eeff411184915e21872d6d9720257a5999ef0864a9cbb1c"
	Oct 02 20:30:39 functional-753218 kubelet[14925]: E1002 20:30:39.584538   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:30:39 functional-753218 kubelet[14925]:         container etcd start failed in pod etcd-functional-753218_kube-system(91f2b96cb3e3f380ded17f30e8d873bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:39 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:30:39 functional-753218 kubelet[14925]: E1002 20:30:39.584575   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753218" podUID="91f2b96cb3e3f380ded17f30e8d873bd"
	Oct 02 20:30:39 functional-753218 kubelet[14925]: E1002 20:30:39.584728   14925 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:30:39 functional-753218 kubelet[14925]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:39 functional-753218 kubelet[14925]:  > podSandboxID="6ae6de7d398fa442f7f140a6767c4de14fdad57319542a7b5e3df53c8ac49d18"
	Oct 02 20:30:39 functional-753218 kubelet[14925]: E1002 20:30:39.584795   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:30:39 functional-753218 kubelet[14925]:         container kube-scheduler start failed in pod kube-scheduler-functional-753218_kube-system(b25a71e49a335bbe853872de1b1e3093): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:39 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:30:39 functional-753218 kubelet[14925]: E1002 20:30:39.585963   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753218" podUID="b25a71e49a335bbe853872de1b1e3093"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.168537   14925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753218?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: I1002 20:30:42.321168   14925 kubelet_node_status.go:75] "Attempting to register node" node="functional-753218"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.321508   14925 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753218"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.545784   14925 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.568537   14925 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:30:42 functional-753218 kubelet[14925]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:42 functional-753218 kubelet[14925]:  > podSandboxID="7a2fde0baea214f3eb0043d508edd186efa5f3f087d902573e164eb4765f9b5b"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.568614   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:30:42 functional-753218 kubelet[14925]:         container kube-apiserver start failed in pod kube-apiserver-functional-753218_kube-system(802f0aebed1bb3dd62306b1d2076fd94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:42 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.568640   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753218" podUID="802f0aebed1bb3dd62306b1d2076fd94"
	Oct 02 20:30:45 functional-753218 kubelet[14925]: E1002 20:30:45.563684   14925 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-753218\" not found"
	Oct 02 20:30:46 functional-753218 kubelet[14925]: E1002 20:30:46.169281   14925 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753218.186ac673e6f5d5d2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753218,UID:functional-753218,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753218 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753218,},FirstTimestamp:2025-10-02 20:26:45.540009426 +0000 UTC m=+0.879461117,LastTimestamp:2025-10-02 20:26:45.540009426 +0000 UTC m=+0.879461117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753218,}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218: exit status 2 (281.131355ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753218" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (1.75s)

TestFunctional/serial/InvalidService (0.06s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-753218 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-753218 apply -f testdata/invalidsvc.yaml: exit status 1 (56.992042ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-753218 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.06s)
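The apply fails before validation even starts: the apiserver at 192.168.49.2:8441 refuses connections, so turning validation off would not get the manifest through either. A quick reachability check for the same context (sketch):

    kubectl --context functional-753218 cluster-info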

TestFunctional/parallel/DashboardCmd (1.59s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-753218 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-753218 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-753218 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-753218 --alsologtostderr -v=1] stderr:
I1002 20:31:03.400939   61762 out.go:360] Setting OutFile to fd 1 ...
I1002 20:31:03.401226   61762 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:31:03.401238   61762 out.go:374] Setting ErrFile to fd 2...
I1002 20:31:03.401244   61762 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:31:03.401458   61762 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
I1002 20:31:03.401738   61762 mustload.go:65] Loading cluster: functional-753218
I1002 20:31:03.402108   61762 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:31:03.402483   61762 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
I1002 20:31:03.419841   61762 host.go:66] Checking if "functional-753218" exists ...
I1002 20:31:03.420080   61762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 20:31:03.472427   61762 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:31:03.462715312 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 20:31:03.472538   61762 api_server.go:166] Checking apiserver status ...
I1002 20:31:03.472575   61762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1002 20:31:03.472610   61762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
I1002 20:31:03.490141   61762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
W1002 20:31:03.598444   61762 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1002 20:31:03.600528   61762 out.go:179] * The control-plane node functional-753218 apiserver is not running: (state=Stopped)
I1002 20:31:03.602029   61762 out.go:179]   To start a cluster, run: "minikube start -p functional-753218"
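The two lines above are minikube's own diagnosis: the dashboard subcommand bails out because the profile's apiserver is stopped. A minimal sketch of reproducing that check by hand, assuming the functional-753218 profile from this run still exists; crictl ships in the kicbase node image, and none of these commands are part of the captured output:

	# re-run the probe that reported state=Stopped above
	out/minikube-linux-amd64 -p functional-753218 status
	# the same pgrep probe the harness logged above, run inside the node
	out/minikube-linux-amd64 -p functional-753218 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# list every container the CRI runtime knows about, started or not
	out/minikube-linux-amd64 -p functional-753218 ssh -- sudo crictl ps -a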
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753218
helpers_test.go:243: (dbg) docker inspect functional-753218:

-- stdout --
	[
	    {
	        "Id": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	        "Created": "2025-10-02T20:04:01.746609873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27425,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:04:01.779241985Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hosts",
	        "LogPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2-json.log",
	        "Name": "/functional-753218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	                "LowerDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753218",
	                "Source": "/var/lib/docker/volumes/functional-753218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753218",
	                "name.minikube.sigs.k8s.io": "functional-753218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "004081b2f3377fd186346a9b01eb1a4bc9a3def8255492ba6d60a3d97d3ae02f",
	            "SandboxKey": "/var/run/docker/netns/004081b2f337",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:da:57:41:87:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5a9808f557430f8bb8f634f8b99193d9e4883c7bab84d388ec04ce3ac0291ef6",
	                    "EndpointID": "2b07b33f476d99b70eed072dbaf735ba0ad3a85f48f17fd63921a2bfbfd2db45",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753218",
	                        "13014a14c3fc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218: exit status 2 (296.875807ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 logs -n 25
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-753218 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ mount          │ -p functional-753218 /tmp/TestFunctionalparallelMountCmdspecific-port2212612994/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh            │ functional-753218 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh            │ functional-753218 ssh -- ls -la /mount-9p                                                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh            │ functional-753218 ssh sudo umount -f /mount-9p                                                                                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service        │ functional-753218 service list                                                                                                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ mount          │ -p functional-753218 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000626359/001:/mount2 --alsologtostderr -v=1                │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ mount          │ -p functional-753218 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000626359/001:/mount1 --alsologtostderr -v=1                │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ mount          │ -p functional-753218 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000626359/001:/mount3 --alsologtostderr -v=1                │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh            │ functional-753218 ssh findmnt -T /mount1                                                                                          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service        │ functional-753218 service list -o json                                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service        │ functional-753218 service --namespace=default --https --url hello-node                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service        │ functional-753218 service hello-node --url --format={{.IP}}                                                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh            │ functional-753218 ssh findmnt -T /mount1                                                                                          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ service        │ functional-753218 service hello-node --url                                                                                        │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh            │ functional-753218 ssh findmnt -T /mount2                                                                                          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh            │ functional-753218 ssh findmnt -T /mount3                                                                                          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ mount          │ -p functional-753218 --kill=true                                                                                                  │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ start          │ -p functional-753218 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ start          │ -p functional-753218 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ start          │ -p functional-753218 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                   │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-753218 --alsologtostderr -v=1                                                                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ update-context │ functional-753218 update-context --alsologtostderr -v=2                                                                           │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ update-context │ functional-753218 update-context --alsologtostderr -v=2                                                                           │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ update-context │ functional-753218 update-context --alsologtostderr -v=2                                                                           │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:31:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:31:01.900418   60624 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:31:01.900625   60624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:31:01.900633   60624 out.go:374] Setting ErrFile to fd 2...
	I1002 20:31:01.900637   60624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:31:01.900837   60624 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:31:01.901233   60624 out.go:368] Setting JSON to false
	I1002 20:31:01.902055   60624 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4411,"bootTime":1759432651,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:31:01.902136   60624 start.go:140] virtualization: kvm guest
	I1002 20:31:01.904282   60624 out.go:179] * [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:31:01.905775   60624 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:31:01.905831   60624 notify.go:221] Checking for updates...
	I1002 20:31:01.908487   60624 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:31:01.909539   60624 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:31:01.910782   60624 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:31:01.912067   60624 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:31:01.913370   60624 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:31:01.915249   60624 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:31:01.915917   60624 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:31:01.940532   60624 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:31:01.940722   60624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:31:01.999857   60624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:31:01.988739527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:31:01.999965   60624 docker.go:319] overlay module found
	I1002 20:31:02.003791   60624 out.go:179] * Using the docker driver based on existing profile
	I1002 20:31:02.005402   60624 start.go:306] selected driver: docker
	I1002 20:31:02.005424   60624 start.go:936] validating driver "docker" against &{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:31:02.005528   60624 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:31:02.005622   60624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:31:02.065972   60624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:31:02.054061844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:31:02.066877   60624 cni.go:84] Creating CNI manager for ""
	I1002 20:31:02.066944   60624 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:31:02.066994   60624 start.go:350] cluster config:
	{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:31:02.069107   60624 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.697507819Z" level=info msg="Checking image status: kicbase/echo-server:functional-753218" id=fa393231-a5fd-49e9-8950-3e6bf6e4053d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.720007372Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-753218" id=7436e45b-1a60-4fc4-a22f-69e3e32eab53 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.720140274Z" level=info msg="Image docker.io/kicbase/echo-server:functional-753218 not found" id=7436e45b-1a60-4fc4-a22f-69e3e32eab53 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.720190361Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-753218 found" id=7436e45b-1a60-4fc4-a22f-69e3e32eab53 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.742733677Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-753218" id=be98d603-767a-459b-b0a0-9fdab10c2304 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.742868717Z" level=info msg="Image localhost/kicbase/echo-server:functional-753218 not found" id=be98d603-767a-459b-b0a0-9fdab10c2304 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.742909978Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-753218 found" id=be98d603-767a-459b-b0a0-9fdab10c2304 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.459772794Z" level=info msg="Checking image status: kicbase/echo-server:functional-753218" id=c8f7a097-87b5-4be9-96a8-83c5b0aea5dd name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.483212464Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-753218" id=344a7416-e7c8-49d6-a170-a397aaa725f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.483336385Z" level=info msg="Image docker.io/kicbase/echo-server:functional-753218 not found" id=344a7416-e7c8-49d6-a170-a397aaa725f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.483365009Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-753218 found" id=344a7416-e7c8-49d6-a170-a397aaa725f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.508218789Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-753218" id=bf0e1532-86b0-435c-95d0-10b8695ecf60 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.508368222Z" level=info msg="Image localhost/kicbase/echo-server:functional-753218 not found" id=bf0e1532-86b0-435c-95d0-10b8695ecf60 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.508409995Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-753218 found" id=bf0e1532-86b0-435c-95d0-10b8695ecf60 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.546136327Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b91303cc-8916-495e-ab50-b39ca6a3e470 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.547120349Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=f14b81fb-d2e6-4ab2-80c7-0d6ecf807ca9 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.548289765Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-753218/kube-apiserver" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.548564978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.553541497Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.554186326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.568588089Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.570341207Z" level=info msg="createCtr: deleting container ID 19fc64788ec4955e7094db6406e309c1f152c13fb679a12bafd3867e5a5ba39c from idIndex" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.570379579Z" level=info msg="createCtr: removing container 19fc64788ec4955e7094db6406e309c1f152c13fb679a12bafd3867e5a5ba39c" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.570421105Z" level=info msg="createCtr: deleting container 19fc64788ec4955e7094db6406e309c1f152c13fb679a12bafd3867e5a5ba39c from storage" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.573125941Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-753218_kube-system_802f0aebed1bb3dd62306b1d2076fd94_0" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:31:04.591415   18123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:31:04.591983   18123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:31:04.593536   18123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:31:04.593992   18123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:31:04.595553   18123 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:31:04 up  1:13,  0 user,  load average: 0.88, 0.24, 0.13
	Linux functional-753218 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:30:56 functional-753218 kubelet[14925]: E1002 20:30:56.170842   14925 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753218.186ac673e6f5d5d2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753218,UID:functional-753218,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753218 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753218,},FirstTimestamp:2025-10-02 20:26:45.540009426 +0000 UTC m=+0.879461117,LastTimestamp:2025-10-02 20:26:45.540009426 +0000 UTC m=+0.879461117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753218,}"
	Oct 02 20:30:56 functional-753218 kubelet[14925]: I1002 20:30:56.325790   14925 kubelet_node_status.go:75] "Attempting to register node" node="functional-753218"
	Oct 02 20:30:56 functional-753218 kubelet[14925]: E1002 20:30:56.326143   14925 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753218"
	Oct 02 20:30:58 functional-753218 kubelet[14925]: E1002 20:30:58.463616   14925 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 02 20:31:00 functional-753218 kubelet[14925]: E1002 20:31:00.518140   14925 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 20:31:03 functional-753218 kubelet[14925]: E1002 20:31:03.171636   14925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753218?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:31:03 functional-753218 kubelet[14925]: I1002 20:31:03.327448   14925 kubelet_node_status.go:75] "Attempting to register node" node="functional-753218"
	Oct 02 20:31:03 functional-753218 kubelet[14925]: E1002 20:31:03.327895   14925 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753218"
	Oct 02 20:31:04 functional-753218 kubelet[14925]: E1002 20:31:04.545732   14925 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:31:04 functional-753218 kubelet[14925]: E1002 20:31:04.545876   14925 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:31:04 functional-753218 kubelet[14925]: E1002 20:31:04.570044   14925 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 02 20:31:04 functional-753218 kubelet[14925]: E1002 20:31:04.580609   14925 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:31:04 functional-753218 kubelet[14925]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:31:04 functional-753218 kubelet[14925]:  > podSandboxID="938004d98ea751eb2eeff411184915e21872d6d9720257a5999ef0864a9cbb1c"
	Oct 02 20:31:04 functional-753218 kubelet[14925]: E1002 20:31:04.580784   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:31:04 functional-753218 kubelet[14925]:         container etcd start failed in pod etcd-functional-753218_kube-system(91f2b96cb3e3f380ded17f30e8d873bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:31:04 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:31:04 functional-753218 kubelet[14925]: E1002 20:31:04.580828   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753218" podUID="91f2b96cb3e3f380ded17f30e8d873bd"
	Oct 02 20:31:04 functional-753218 kubelet[14925]: E1002 20:31:04.580990   14925 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:31:04 functional-753218 kubelet[14925]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:31:04 functional-753218 kubelet[14925]:  > podSandboxID="ba5a822eb2aa1ee658392a97653e821bc6257f42bed995f9d1bb4bf5428596c9"
	Oct 02 20:31:04 functional-753218 kubelet[14925]: E1002 20:31:04.581058   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:31:04 functional-753218 kubelet[14925]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-753218_kube-system(b932b0024653c86a7ea85a2a83a943a4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:31:04 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:31:04 functional-753218 kubelet[14925]: E1002 20:31:04.582128   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-753218" podUID="b932b0024653c86a7ea85a2a83a943a4"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218: exit status 2 (295.678594ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753218" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (1.59s)
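Note the common root cause running through this section: every CreateContainer attempt is rejected by CRI-O with "cannot open sd-bus: No such file or directory" (see the CRI-O and kubelet excerpts above), so etcd, kube-controller-manager, and kube-apiserver never start, which is why the dashboard, status, and kubectl checks all see a stopped apiserver. A hedged sketch for probing that suspicion, on the assumption that a systemd cgroup manager needs a reachable systemd bus inside the node container; the paths and commands below are illustrative, not taken from this report:

	# hypothetical checks, assuming the functional-753218 container is still running
	docker exec functional-753218 systemctl is-system-running    # is systemd up and serving its bus?
	docker exec functional-753218 ls -l /run/systemd/private     # the sd-bus socket a systemd cgroup manager dials
	docker exec functional-753218 crictl ps -a                   # confirm no control-plane container was ever created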

x
+
TestFunctional/parallel/StatusCmd (2.16s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 status: exit status 2 (317.129332ms)

-- stdout --
	functional-753218
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-amd64 -p functional-753218 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (314.831034ms)

-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-amd64 -p functional-753218 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 status -o json: exit status 2 (299.972117ms)

-- stdout --
	{"Name":"functional-753218","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-amd64 -p functional-753218 status -o json" : exit status 2
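The three invocations above render one status struct in three encodings: the default text block, a Go template via -f (the template keys mirror the JSON field names: Host, Kubelet, APIServer, Kubeconfig), and JSON via -o json. A small sketch of consuming the JSON form in a script; jq is an assumption of this sketch, not something the harness uses:

	# hypothetical scripted use of the JSON status shown above
	out/minikube-linux-amd64 -p functional-753218 status -o json 2>/dev/null | jq -r '.APIServer'
	# prints: Stopped   (minikube itself exits 2 here, which the pipeline masks unless pipefail is set)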
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753218
helpers_test.go:243: (dbg) docker inspect functional-753218:

-- stdout --
	[
	    {
	        "Id": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	        "Created": "2025-10-02T20:04:01.746609873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27425,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:04:01.779241985Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hosts",
	        "LogPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2-json.log",
	        "Name": "/functional-753218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	                "LowerDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753218",
	                "Source": "/var/lib/docker/volumes/functional-753218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753218",
	                "name.minikube.sigs.k8s.io": "functional-753218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "004081b2f3377fd186346a9b01eb1a4bc9a3def8255492ba6d60a3d97d3ae02f",
	            "SandboxKey": "/var/run/docker/netns/004081b2f337",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:da:57:41:87:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5a9808f557430f8bb8f634f8b99193d9e4883c7bab84d388ec04ce3ac0291ef6",
	                    "EndpointID": "2b07b33f476d99b70eed072dbaf735ba0ad3a85f48f17fd63921a2bfbfd2db45",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753218",
	                        "13014a14c3fc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218: exit status 2 (296.196725ms)

-- stdout --
	Running

                                                
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 logs -n 25
helpers_test.go:260: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-753218 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                  │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ image   │ functional-753218 image save --daemon kicbase/echo-server:functional-753218 --alsologtostderr                                     │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh sudo umount -f /mount-9p                                                                                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ mount   │ -p functional-753218 /tmp/TestFunctionalparallelMountCmdspecific-port2212612994/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh     │ functional-753218 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh -- ls -la /mount-9p                                                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh sudo umount -f /mount-9p                                                                                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service │ functional-753218 service list                                                                                                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ mount   │ -p functional-753218 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000626359/001:/mount2 --alsologtostderr -v=1                │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ mount   │ -p functional-753218 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000626359/001:/mount1 --alsologtostderr -v=1                │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ mount   │ -p functional-753218 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000626359/001:/mount3 --alsologtostderr -v=1                │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh     │ functional-753218 ssh findmnt -T /mount1                                                                                          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service │ functional-753218 service list -o json                                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service │ functional-753218 service --namespace=default --https --url hello-node                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service │ functional-753218 service hello-node --url --format={{.IP}}                                                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh     │ functional-753218 ssh findmnt -T /mount1                                                                                          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ service │ functional-753218 service hello-node --url                                                                                        │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh     │ functional-753218 ssh findmnt -T /mount2                                                                                          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh findmnt -T /mount3                                                                                          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ mount   │ -p functional-753218 --kill=true                                                                                                  │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ start   │ -p functional-753218 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ start   │ -p functional-753218 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ start   │ -p functional-753218 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                   │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:31:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:31:01.900418   60624 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:31:01.900625   60624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:31:01.900633   60624 out.go:374] Setting ErrFile to fd 2...
	I1002 20:31:01.900637   60624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:31:01.900837   60624 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:31:01.901233   60624 out.go:368] Setting JSON to false
	I1002 20:31:01.902055   60624 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4411,"bootTime":1759432651,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:31:01.902136   60624 start.go:140] virtualization: kvm guest
	I1002 20:31:01.904282   60624 out.go:179] * [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:31:01.905775   60624 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:31:01.905831   60624 notify.go:221] Checking for updates...
	I1002 20:31:01.908487   60624 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:31:01.909539   60624 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:31:01.910782   60624 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:31:01.912067   60624 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:31:01.913370   60624 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:31:01.915249   60624 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:31:01.915917   60624 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:31:01.940532   60624 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:31:01.940722   60624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:31:01.999857   60624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:31:01.988739527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:31:01.999965   60624 docker.go:319] overlay module found
	I1002 20:31:02.003791   60624 out.go:179] * Using the docker driver based on existing profile
	I1002 20:31:02.005402   60624 start.go:306] selected driver: docker
	I1002 20:31:02.005424   60624 start.go:936] validating driver "docker" against &{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:31:02.005528   60624 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:31:02.005622   60624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:31:02.065972   60624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:31:02.054061844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:31:02.066877   60624 cni.go:84] Creating CNI manager for ""
	I1002 20:31:02.066944   60624 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:31:02.066994   60624 start.go:350] cluster config:
	{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:31:02.069107   60624 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.697507819Z" level=info msg="Checking image status: kicbase/echo-server:functional-753218" id=fa393231-a5fd-49e9-8950-3e6bf6e4053d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.720007372Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-753218" id=7436e45b-1a60-4fc4-a22f-69e3e32eab53 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.720140274Z" level=info msg="Image docker.io/kicbase/echo-server:functional-753218 not found" id=7436e45b-1a60-4fc4-a22f-69e3e32eab53 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.720190361Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-753218 found" id=7436e45b-1a60-4fc4-a22f-69e3e32eab53 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.742733677Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-753218" id=be98d603-767a-459b-b0a0-9fdab10c2304 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.742868717Z" level=info msg="Image localhost/kicbase/echo-server:functional-753218 not found" id=be98d603-767a-459b-b0a0-9fdab10c2304 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.742909978Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-753218 found" id=be98d603-767a-459b-b0a0-9fdab10c2304 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.459772794Z" level=info msg="Checking image status: kicbase/echo-server:functional-753218" id=c8f7a097-87b5-4be9-96a8-83c5b0aea5dd name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.483212464Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-753218" id=344a7416-e7c8-49d6-a170-a397aaa725f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.483336385Z" level=info msg="Image docker.io/kicbase/echo-server:functional-753218 not found" id=344a7416-e7c8-49d6-a170-a397aaa725f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.483365009Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-753218 found" id=344a7416-e7c8-49d6-a170-a397aaa725f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.508218789Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-753218" id=bf0e1532-86b0-435c-95d0-10b8695ecf60 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.508368222Z" level=info msg="Image localhost/kicbase/echo-server:functional-753218 not found" id=bf0e1532-86b0-435c-95d0-10b8695ecf60 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.508409995Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-753218 found" id=bf0e1532-86b0-435c-95d0-10b8695ecf60 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.546136327Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b91303cc-8916-495e-ab50-b39ca6a3e470 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.547120349Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=f14b81fb-d2e6-4ab2-80c7-0d6ecf807ca9 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.548289765Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-753218/kube-apiserver" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.548564978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.553541497Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.554186326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.568588089Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.570341207Z" level=info msg="createCtr: deleting container ID 19fc64788ec4955e7094db6406e309c1f152c13fb679a12bafd3867e5a5ba39c from idIndex" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.570379579Z" level=info msg="createCtr: removing container 19fc64788ec4955e7094db6406e309c1f152c13fb679a12bafd3867e5a5ba39c" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.570421105Z" level=info msg="createCtr: deleting container 19fc64788ec4955e7094db6406e309c1f152c13fb679a12bafd3867e5a5ba39c from storage" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.573125941Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-753218_kube-system_802f0aebed1bb3dd62306b1d2076fd94_0" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:31:03.201582   17831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:31:03.202020   17831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:31:03.203512   17831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:31:03.203906   17831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:31:03.205447   17831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:31:03 up  1:13,  0 user,  load average: 0.88, 0.24, 0.13
	Linux functional-753218 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:30:53 functional-753218 kubelet[14925]: E1002 20:30:53.583334   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-753218" podUID="b932b0024653c86a7ea85a2a83a943a4"
	Oct 02 20:30:54 functional-753218 kubelet[14925]: E1002 20:30:54.545043   14925 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:30:54 functional-753218 kubelet[14925]: E1002 20:30:54.566502   14925 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:30:54 functional-753218 kubelet[14925]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:54 functional-753218 kubelet[14925]:  > podSandboxID="6ae6de7d398fa442f7f140a6767c4de14fdad57319542a7b5e3df53c8ac49d18"
	Oct 02 20:30:54 functional-753218 kubelet[14925]: E1002 20:30:54.566605   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:30:54 functional-753218 kubelet[14925]:         container kube-scheduler start failed in pod kube-scheduler-functional-753218_kube-system(b25a71e49a335bbe853872de1b1e3093): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:54 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:30:54 functional-753218 kubelet[14925]: E1002 20:30:54.566641   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753218" podUID="b25a71e49a335bbe853872de1b1e3093"
	Oct 02 20:30:55 functional-753218 kubelet[14925]: E1002 20:30:55.545737   14925 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:30:55 functional-753218 kubelet[14925]: E1002 20:30:55.564007   14925 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-753218\" not found"
	Oct 02 20:30:55 functional-753218 kubelet[14925]: E1002 20:30:55.573357   14925 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:30:55 functional-753218 kubelet[14925]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:55 functional-753218 kubelet[14925]:  > podSandboxID="7a2fde0baea214f3eb0043d508edd186efa5f3f087d902573e164eb4765f9b5b"
	Oct 02 20:30:55 functional-753218 kubelet[14925]: E1002 20:30:55.573464   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:30:55 functional-753218 kubelet[14925]:         container kube-apiserver start failed in pod kube-apiserver-functional-753218_kube-system(802f0aebed1bb3dd62306b1d2076fd94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:55 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:30:55 functional-753218 kubelet[14925]: E1002 20:30:55.573515   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753218" podUID="802f0aebed1bb3dd62306b1d2076fd94"
	Oct 02 20:30:56 functional-753218 kubelet[14925]: E1002 20:30:56.170861   14925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753218?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:30:56 functional-753218 kubelet[14925]: E1002 20:30:56.170842   14925 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753218.186ac673e6f5d5d2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753218,UID:functional-753218,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753218 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753218,},FirstTimestamp:2025-10-02 20:26:45.540009426 +0000 UTC m=+0.879461117,LastTimestamp:2025-10-02 20:26:45.540009426 +0000 UTC m=+0.879461117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753218,}"
	Oct 02 20:30:56 functional-753218 kubelet[14925]: I1002 20:30:56.325790   14925 kubelet_node_status.go:75] "Attempting to register node" node="functional-753218"
	Oct 02 20:30:56 functional-753218 kubelet[14925]: E1002 20:30:56.326143   14925 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753218"
	Oct 02 20:30:58 functional-753218 kubelet[14925]: E1002 20:30:58.463616   14925 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 02 20:31:00 functional-753218 kubelet[14925]: E1002 20:31:00.518140   14925 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 20:31:03 functional-753218 kubelet[14925]: E1002 20:31:03.171636   14925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753218?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218: exit status 2 (297.041152ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753218" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (2.16s)
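
For reference, the --format/-f flag exercised above takes a Go text/template rendered against the status fields; the misspelled "kublet" label comes from the test's own format string, while the template key it reads is still .Kubelet. A minimal standalone sketch (the struct here is an assumption mirroring the template keys, not minikube's actual status type):

	package main

	import (
		"os"
		"text/template"
	)

	// statusFields mirrors the {{.Host}}, {{.Kubelet}}, {{.APIServer}} and
	// {{.Kubeconfig}} keys used in the format string above.
	type statusFields struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		// Format string copied from the test invocation above; the literal
		// label "kublet" is plain text, while {{.Kubelet}} is the field lookup.
		const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		t := template.Must(template.New("status").Parse(format))
		_ = t.Execute(os.Stdout, statusFields{
			Host: "Running", Kubelet: "Running", APIServer: "Stopped", Kubeconfig: "Configured",
		})
	}

Run against the values captured above, this reproduces the test's stdout: host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured.
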

x
+
TestFunctional/parallel/ServiceCmdConnect (1.99s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-753218 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-753218 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (43.386717ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-753218 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-753218 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-753218 describe po hello-node-connect: exit status 1 (46.372693ms)

** stderr ** 
	E1002 20:30:56.349833   57238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:30:56.350239   57238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:30:56.351618   57238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:30:56.351958   57238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:30:56.353313   57238 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-753218 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-753218 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-753218 logs -l app=hello-node-connect: exit status 1 (47.719699ms)

** stderr ** 
	E1002 20:30:56.397835   57252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:30:56.398152   57252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:30:56.399543   57252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:30:56.399812   57252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-753218 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-753218 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-753218 describe svc hello-node-connect: exit status 1 (45.569339ms)

** stderr ** 
	E1002 20:30:56.443537   57266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:30:56.443904   57266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:30:56.445292   57266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:30:56.445555   57266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:30:56.446910   57266 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-753218 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
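
Every kubectl call in this post-mortem fails the same way: dial tcp 192.168.49.2:8441: connect: connection refused. A quick, kubectl-independent probe of that port (a sketch; the address is taken from the errors above) would confirm whether the apiserver endpoint is reachable at all:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 192.168.49.2:8441 is the apiserver endpoint from the errors above.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			// Expected here: "connect: connection refused", matching kubectl.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}
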
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753218
helpers_test.go:243: (dbg) docker inspect functional-753218:

-- stdout --
	[
	    {
	        "Id": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	        "Created": "2025-10-02T20:04:01.746609873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27425,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:04:01.779241985Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hosts",
	        "LogPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2-json.log",
	        "Name": "/functional-753218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	                "LowerDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753218",
	                "Source": "/var/lib/docker/volumes/functional-753218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753218",
	                "name.minikube.sigs.k8s.io": "functional-753218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "004081b2f3377fd186346a9b01eb1a4bc9a3def8255492ba6d60a3d97d3ae02f",
	            "SandboxKey": "/var/run/docker/netns/004081b2f337",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:da:57:41:87:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5a9808f557430f8bb8f634f8b99193d9e4883c7bab84d388ec04ce3ac0291ef6",
	                    "EndpointID": "2b07b33f476d99b70eed072dbaf735ba0ad3a85f48f17fd63921a2bfbfd2db45",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753218",
	                        "13014a14c3fc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
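The PortBindings and Ports maps in the inspect output above show how the kic container publishes its services: each container port (22, 2376, 5000, 8441, 32443) is bound to an ephemeral port on 127.0.0.1, with 8441/tcp (the apiserver) landing on 32781 in this run. A small sketch, not minikube's own code, of reading that mapping back out of the docker inspect JSON:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspect mirrors just the slice of the docker inspect JSON shown above.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "functional-753218").Output()
		if err != nil {
			panic(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		if len(containers) == 0 {
			panic("no such container")
		}
		// Expected to print 127.0.0.1:32781 for the run captured above.
		for _, b := range containers[0].NetworkSettings.Ports["8441/tcp"] {
			fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort)
		}
	}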
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218: exit status 2 (308.458564ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 logs -n 25
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-753218 ssh -n functional-753218 sudo cat /home/docker/cp-test.txt                                                                                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ cp      │ functional-753218 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh echo hello                                                                                                                                │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh -n functional-753218 sudo cat /tmp/does/not/exist/cp-test.txt                                                                             │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ image   │ functional-753218 image ls                                                                                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ tunnel  │ functional-753218 tunnel --alsologtostderr                                                                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh     │ functional-753218 ssh cat /etc/hostname                                                                                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ image   │ functional-753218 image load --daemon kicbase/echo-server:functional-753218 --alsologtostderr                                                                   │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ tunnel  │ functional-753218 tunnel --alsologtostderr                                                                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh     │ functional-753218 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ mount   │ -p functional-753218 /tmp/TestFunctionalparallelMountCmdany-port549600056/001:/mount-9p --alsologtostderr -v=1                                                  │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ image   │ functional-753218 image ls                                                                                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ image   │ functional-753218 image save kicbase/echo-server:functional-753218 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh -- ls -la /mount-9p                                                                                                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ image   │ functional-753218 image rm kicbase/echo-server:functional-753218 --alsologtostderr                                                                              │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh cat /mount-9p/test-1759437054175377768                                                                                                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ image   │ functional-753218 image ls                                                                                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ image   │ functional-753218 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                                │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ image   │ functional-753218 image save --daemon kicbase/echo-server:functional-753218 --alsologtostderr                                                                   │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ mount   │ -p functional-753218 /tmp/TestFunctionalparallelMountCmdspecific-port2212612994/001:/mount-9p --alsologtostderr -v=1 --port 46464                               │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:18:34
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:18:34.206207   39074 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:18:34.206493   39074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:34.206497   39074 out.go:374] Setting ErrFile to fd 2...
	I1002 20:18:34.206500   39074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:34.206690   39074 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:18:34.207119   39074 out.go:368] Setting JSON to false
	I1002 20:18:34.208025   39074 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3663,"bootTime":1759432651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:18:34.208099   39074 start.go:140] virtualization: kvm guest
	I1002 20:18:34.211076   39074 out.go:179] * [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:18:34.212342   39074 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:18:34.212345   39074 notify.go:221] Checking for updates...
	I1002 20:18:34.213685   39074 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:18:34.214912   39074 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:18:34.216075   39074 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:18:34.217175   39074 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:18:34.218365   39074 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:18:34.219862   39074 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:18:34.219970   39074 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:18:34.243293   39074 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:18:34.243370   39074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:34.294846   39074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 20:18:34.285071909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:18:34.294933   39074 docker.go:319] overlay module found
	I1002 20:18:34.296853   39074 out.go:179] * Using the docker driver based on existing profile
	I1002 20:18:34.297994   39074 start.go:306] selected driver: docker
	I1002 20:18:34.298010   39074 start.go:936] validating driver "docker" against &{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:34.298070   39074 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:18:34.298154   39074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:34.347576   39074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 20:18:34.338434102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:18:34.348199   39074 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:18:34.348218   39074 cni.go:84] Creating CNI manager for ""
	I1002 20:18:34.348268   39074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:18:34.348308   39074 start.go:350] cluster config:
	{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:34.350240   39074 out.go:179] * Starting "functional-753218" primary control-plane node in "functional-753218" cluster
	I1002 20:18:34.351573   39074 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:18:34.353042   39074 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:18:34.354380   39074 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:18:34.354407   39074 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:18:34.354414   39074 cache.go:59] Caching tarball of preloaded images
	I1002 20:18:34.354480   39074 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:18:34.354514   39074 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:18:34.354521   39074 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:18:34.354600   39074 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/config.json ...
	I1002 20:18:34.373723   39074 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:18:34.373737   39074 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:18:34.373750   39074 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:18:34.373779   39074 start.go:361] acquireMachinesLock for functional-753218: {Name:mk742badf6f1dbafca8397e398143e758831ae3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:18:34.373825   39074 start.go:365] duration metric: took 33.687µs to acquireMachinesLock for "functional-753218"
	I1002 20:18:34.373838   39074 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:18:34.373845   39074 fix.go:55] fixHost starting: 
	I1002 20:18:34.374037   39074 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:18:34.391194   39074 fix.go:113] recreateIfNeeded on functional-753218: state=Running err=<nil>
	W1002 20:18:34.391212   39074 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:18:34.393102   39074 out.go:252] * Updating the running docker "functional-753218" container ...
	I1002 20:18:34.393135   39074 machine.go:93] provisionDockerMachine start ...
	I1002 20:18:34.393196   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:34.410850   39074 main.go:141] libmachine: Using SSH client type: native
	I1002 20:18:34.411066   39074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:18:34.411072   39074 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:18:34.552329   39074 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:18:34.552359   39074 ubuntu.go:182] provisioning hostname "functional-753218"
	I1002 20:18:34.552416   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:34.570052   39074 main.go:141] libmachine: Using SSH client type: native
	I1002 20:18:34.570307   39074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:18:34.570319   39074 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753218 && echo "functional-753218" | sudo tee /etc/hostname
	I1002 20:18:34.721441   39074 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:18:34.721512   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:34.738897   39074 main.go:141] libmachine: Using SSH client type: native
	I1002 20:18:34.739113   39074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:18:34.739125   39074 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753218/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:18:34.881059   39074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
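	// The SSH fragment above keeps /etc/hosts resolving the machine name:
	// leave any existing entry alone, rewrite a 127.0.1.1 line if present,
	// otherwise append one. A minimal Go sketch of the same logic
	// (illustrative only; minikube runs it as the shell fragment shown):
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// ensureHosts returns contents with a "127.0.1.1 host" entry guaranteed.
	func ensureHosts(contents, host string) string {
		lines := strings.Split(contents, "\n")
		for _, l := range lines {
			if f := strings.Fields(l); len(f) > 0 && f[len(f)-1] == host {
				return contents // an entry for the hostname already exists
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + host // rewrite the loopback alias
				return strings.Join(lines, "\n")
			}
		}
		return contents + "127.0.1.1 " + host + "\n" // append a new entry
	}
	
	func main() {
		b, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Print(ensureHosts(string(b), "functional-753218"))
	}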
	I1002 20:18:34.881084   39074 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:18:34.881113   39074 ubuntu.go:190] setting up certificates
	I1002 20:18:34.881121   39074 provision.go:84] configureAuth start
	I1002 20:18:34.881164   39074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:18:34.899501   39074 provision.go:143] copyHostCerts
	I1002 20:18:34.899560   39074 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:18:34.899574   39074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:18:34.899678   39074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:18:34.899811   39074 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:18:34.899820   39074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:18:34.899861   39074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:18:34.899952   39074 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:18:34.899957   39074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:18:34.899992   39074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:18:34.900070   39074 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.functional-753218 san=[127.0.0.1 192.168.49.2 functional-753218 localhost minikube]
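	// A sketch of the server-cert step above: an x509 template whose SANs
	// mirror the log line (127.0.0.1, 192.168.49.2, functional-753218,
	// localhost, minikube). Self-signed here for brevity, whereas the real
	// step signs with ca.pem/ca-key.pem; key size and validity are
	// illustrative, not minikube's actual values.
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)
	
	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-753218"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the provision.go line above:
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:    []string{"functional-753218", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Printf("DER cert: %d bytes\n", len(der))
	}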
	I1002 20:18:35.209717   39074 provision.go:177] copyRemoteCerts
	I1002 20:18:35.209761   39074 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:18:35.209800   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.226488   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:35.326447   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:18:35.342793   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:18:35.359162   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:18:35.375197   39074 provision.go:87] duration metric: took 494.066038ms to configureAuth
	I1002 20:18:35.375214   39074 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:18:35.375353   39074 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:18:35.375460   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.392271   39074 main.go:141] libmachine: Using SSH client type: native
	I1002 20:18:35.392535   39074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:18:35.392555   39074 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:18:35.662001   39074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:18:35.662017   39074 machine.go:96] duration metric: took 1.268875772s to provisionDockerMachine
	I1002 20:18:35.662029   39074 start.go:294] postStartSetup for "functional-753218" (driver="docker")
	I1002 20:18:35.662042   39074 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:18:35.662106   39074 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:18:35.662147   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.679558   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:35.779752   39074 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:18:35.783115   39074 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:18:35.783131   39074 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:18:35.783153   39074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:18:35.783280   39074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:18:35.783385   39074 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:18:35.783488   39074 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts -> hosts in /etc/test/nested/copy/12851
	I1002 20:18:35.783529   39074 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12851
	I1002 20:18:35.791362   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:18:35.807703   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts --> /etc/test/nested/copy/12851/hosts (40 bytes)
	I1002 20:18:35.824578   39074 start.go:297] duration metric: took 162.536937ms for postStartSetup
	I1002 20:18:35.824707   39074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:18:35.824741   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.842117   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:35.939428   39074 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:18:35.943787   39074 fix.go:57] duration metric: took 1.569934708s for fixHost
	I1002 20:18:35.943804   39074 start.go:84] releasing machines lock for "functional-753218", held for 1.569972452s
	I1002 20:18:35.943864   39074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:18:35.960772   39074 ssh_runner.go:195] Run: cat /version.json
	I1002 20:18:35.960815   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.960859   39074 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:18:35.960900   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.978069   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:35.978425   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:36.126122   39074 ssh_runner.go:195] Run: systemctl --version
	I1002 20:18:36.132369   39074 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:18:36.165368   39074 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:18:36.169751   39074 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:18:36.169819   39074 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:18:36.177394   39074 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:18:36.177405   39074 start.go:496] detecting cgroup driver to use...
	I1002 20:18:36.177434   39074 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:18:36.177487   39074 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:18:36.191941   39074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:18:36.203333   39074 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:18:36.203390   39074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:18:36.216968   39074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:18:36.228214   39074 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:18:36.308949   39074 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:18:36.392928   39074 docker.go:234] disabling docker service ...
	I1002 20:18:36.392976   39074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:18:36.406808   39074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:18:36.418402   39074 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:18:36.501067   39074 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:18:36.583824   39074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:18:36.595669   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:18:36.609110   39074 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:18:36.609154   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.617194   39074 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:18:36.617240   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.625324   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.633155   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.641048   39074 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:18:36.648837   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.656786   39074 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.664478   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.672362   39074 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:18:36.678936   39074 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:18:36.685474   39074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:18:36.766185   39074 ssh_runner.go:195] Run: sudo systemctl restart crio
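	// The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in
	// place: pin the pause image and switch the cgroup manager to systemd,
	// then restart crio. A sketch of the same rewrite in Go, operating on an
	// in-memory sample (the starting values here are invented; the real run
	// edits the file over SSH with sudo):
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		conf := "pause_image = \"old\"\ncgroup_manager = \"cgroupfs\"\n"
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "systemd"`)
		fmt.Print(conf)
	}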
	I1002 20:18:36.872474   39074 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:18:36.872521   39074 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:18:36.876161   39074 start.go:564] Will wait 60s for crictl version
	I1002 20:18:36.876199   39074 ssh_runner.go:195] Run: which crictl
	I1002 20:18:36.879320   39074 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:18:36.901521   39074 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:18:36.901576   39074 ssh_runner.go:195] Run: crio --version
	I1002 20:18:36.927454   39074 ssh_runner.go:195] Run: crio --version
	I1002 20:18:36.955669   39074 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:18:36.956820   39074 cli_runner.go:164] Run: docker network inspect functional-753218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:18:36.973453   39074 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:18:36.979247   39074 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 20:18:36.980537   39074 kubeadm.go:883] updating cluster {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:18:36.980633   39074 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:18:36.980707   39074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:18:37.012555   39074 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:18:37.012566   39074 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:18:37.012602   39074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:18:37.037114   39074 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:18:37.037125   39074 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:18:37.037130   39074 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:18:37.037235   39074 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:18:37.037301   39074 ssh_runner.go:195] Run: crio config
	I1002 20:18:37.080633   39074 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 20:18:37.080675   39074 cni.go:84] Creating CNI manager for ""
	I1002 20:18:37.080685   39074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:18:37.080697   39074 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:18:37.080715   39074 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753218 NodeName:functional-753218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:18:37.080819   39074 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753218"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
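	This generated config is staged as /var/tmp/minikube/kubeadm.yaml.new and fed to the kubeadm phases below. A config like this can be sanity-checked without mutating node state; one option is kubeadm's dry-run mode (a sketch, assuming the bundled v1.34.1 kubeadm binary):
	
	  # Render everything kubeadm would apply, without applying it
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new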
	I1002 20:18:37.080866   39074 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:18:37.088458   39074 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:18:37.088499   39074 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:18:37.095835   39074 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:18:37.107722   39074 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:18:37.119278   39074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1002 20:18:37.130821   39074 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:18:37.134590   39074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:18:37.217285   39074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:18:37.229402   39074 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218 for IP: 192.168.49.2
	I1002 20:18:37.229423   39074 certs.go:195] generating shared ca certs ...
	I1002 20:18:37.229445   39074 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:18:37.229580   39074 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:18:37.229612   39074 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:18:37.229635   39074 certs.go:257] generating profile certs ...
	I1002 20:18:37.229744   39074 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key
	I1002 20:18:37.229781   39074 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key.2c64f804
	I1002 20:18:37.229820   39074 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key
	I1002 20:18:37.229920   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:18:37.229944   39074 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:18:37.229949   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:18:37.229969   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:18:37.229988   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:18:37.230004   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:18:37.230036   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:18:37.230546   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:18:37.247164   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:18:37.262985   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:18:37.279026   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:18:37.294907   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:18:37.311017   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:18:37.326759   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:18:37.342531   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:18:37.358985   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:18:37.375049   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:18:37.390853   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:18:37.406776   39074 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:18:37.418137   39074 ssh_runner.go:195] Run: openssl version
	I1002 20:18:37.423758   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:18:37.431400   39074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:18:37.434759   39074 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:18:37.434796   39074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:18:37.469193   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:18:37.476976   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:18:37.484860   39074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:18:37.488438   39074 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:18:37.488489   39074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:18:37.521688   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:18:37.529613   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:18:37.537558   39074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:18:37.541046   39074 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:18:37.541078   39074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:18:37.574961   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
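	The link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes: OpenSSL resolves a CA under /etc/ssl/certs by hashing the certificate subject and appending .0. The pattern minikube runs for each CA, condensed (a sketch):
	
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints e.g. b5213941
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"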
	I1002 20:18:37.582802   39074 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:18:37.586377   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:18:37.620185   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:18:37.653623   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:18:37.686983   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:18:37.720317   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:18:37.753617   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
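	Each -checkend 86400 asks openssl whether the certificate expires within the next 86400 seconds (24h); exit status 0 means it stays valid past that window, so minikube skips regenerating it. The same check in isolation (a sketch):
	
	  if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	    echo "still valid for at least 24h"
	  else
	    echo "expires (or already expired) within 24h"
	  fi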
	I1002 20:18:37.787371   39074 kubeadm.go:400] StartCluster: {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:37.787431   39074 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:18:37.787474   39074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:18:37.813804   39074 cri.go:89] found id: ""
	I1002 20:18:37.813849   39074 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:18:37.821398   39074 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:18:37.821423   39074 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:18:37.821468   39074 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:18:37.828438   39074 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:18:37.828913   39074 kubeconfig.go:125] found "functional-753218" server: "https://192.168.49.2:8441"
	I1002 20:18:37.830019   39074 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:18:37.837252   39074 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 20:04:06.241851372 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 20:18:37.128983250 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
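	Drift detection here is a plain diff -u between the last applied config and the newly rendered one; any non-zero exit flags the cluster for reconfiguration. Reduced to its essence (a sketch):
	
	  if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	    echo "kubeadm config drift detected; reconfiguring cluster"
	  fi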
	I1002 20:18:37.837272   39074 kubeadm.go:1160] stopping kube-system containers ...
	I1002 20:18:37.837284   39074 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 20:18:37.837326   39074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:18:37.863302   39074 cri.go:89] found id: ""
	I1002 20:18:37.863361   39074 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 20:18:37.911147   39074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:18:37.918894   39074 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 20:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  2 20:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  2 20:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  2 20:08 /etc/kubernetes/scheduler.conf
	
	I1002 20:18:37.918950   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:18:37.926065   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:18:37.933031   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:18:37.933065   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:18:37.939972   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:18:37.946875   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:18:37.946911   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:18:37.953620   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:18:37.960544   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:18:37.960573   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
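	Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint; on a miss (grep exits 1) the file is deleted so the kubeconfig phase below regenerates it. The loop, condensed (a sketch; in this run admin.conf passed the check, the other three did not):
	
	  endpoint="https://control-plane.minikube.internal:8441"
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	  done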
	I1002 20:18:37.967317   39074 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:18:37.974311   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:38.013321   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:39.074022   39074 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060677583s)
	I1002 20:18:39.074075   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:39.228791   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:39.281116   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
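	The restart path replays individual kubeadm init phases instead of a full init: certificates, kubeconfigs, kubelet bootstrap, static control-plane manifests, then local etcd. In order (a sketch, same config file throughout):
	
	  cfg=/var/tmp/minikube/kubeadm.yaml
	  sudo kubeadm init phase certs all         --config "$cfg"
	  sudo kubeadm init phase kubeconfig all    --config "$cfg"
	  sudo kubeadm init phase kubelet-start     --config "$cfg"
	  sudo kubeadm init phase control-plane all --config "$cfg"
	  sudo kubeadm init phase etcd local        --config "$cfg"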
	I1002 20:18:39.328956   39074 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:18:39.329020   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:39.829304   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:40.329782   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:40.830022   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:41.329834   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:41.829218   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:42.329847   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:42.829333   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:43.329809   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:43.829522   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:44.329493   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:44.829576   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:45.329166   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:45.829738   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:46.329491   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:46.829212   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:47.330127   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:47.829175   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:48.329888   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:48.829745   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:49.330019   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:49.829990   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:50.330054   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:50.829373   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:51.330102   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:51.829486   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:52.329898   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:52.829160   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:53.329735   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:53.829783   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:54.329822   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:54.829468   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:55.329274   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:55.829515   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:56.329151   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:56.829940   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:57.329721   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:57.829433   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:58.329165   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:58.829113   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:59.329101   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:18:59.829897   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:00.329742   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:00.829770   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:01.329988   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:01.830082   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:02.329237   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:02.829922   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:03.330132   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:03.829921   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:04.329162   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:04.829124   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:05.329748   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:05.829595   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:06.329426   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:06.829387   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:07.329567   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:07.830080   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:08.329899   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:08.829745   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:09.329666   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:09.829758   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:10.329818   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:10.829090   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:11.329880   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:11.829546   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:12.329286   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:12.830050   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:13.329756   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:13.829521   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:14.329346   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:14.829881   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:15.329641   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:15.829463   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:16.329288   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:16.829123   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:17.329880   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:17.829643   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:18.329839   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:18.829576   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:19.329600   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:19.829397   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:20.329443   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:20.829214   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:21.329827   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:21.829216   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:22.329884   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:22.829410   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:23.329734   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:23.829124   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:24.330092   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:24.829862   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:25.329373   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:25.829486   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:26.329987   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:26.829953   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:27.330064   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:27.829775   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:28.329834   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:28.829394   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:29.329185   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:29.829478   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:30.329460   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:30.829312   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:31.330076   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:31.829866   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:32.329434   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:32.829588   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:33.329475   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:33.829203   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:34.329105   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:34.829918   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:35.329741   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:35.829625   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:36.329350   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:36.829147   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:37.329144   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:37.829141   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:38.329884   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:38.829677   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
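	Roughly a minute of polling at 500ms intervals with no hit: pgrep -x requires an exact match, -n takes the newest candidate, and -f matches against the full command line, so this only succeeds once a kube-apiserver process whose command line mentions minikube exists. The wait loop, condensed (a sketch; add a timeout in real use):
	
	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    sleep 0.5
	  done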
	I1002 20:19:39.329725   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:39.329777   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:39.355028   39074 cri.go:89] found id: ""
	I1002 20:19:39.355041   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.355048   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:39.355053   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:39.355092   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:39.380001   39074 cri.go:89] found id: ""
	I1002 20:19:39.380017   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.380026   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:39.380031   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:39.380090   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:39.405251   39074 cri.go:89] found id: ""
	I1002 20:19:39.405267   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.405273   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:39.405277   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:39.405321   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:39.430719   39074 cri.go:89] found id: ""
	I1002 20:19:39.430732   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.430739   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:39.430745   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:39.430794   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:39.454916   39074 cri.go:89] found id: ""
	I1002 20:19:39.454929   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.454936   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:39.454940   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:39.454981   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:39.478922   39074 cri.go:89] found id: ""
	I1002 20:19:39.478934   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.478940   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:39.478944   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:39.478983   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:39.503714   39074 cri.go:89] found id: ""
	I1002 20:19:39.503731   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.503739   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:39.503749   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:39.503760   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:39.573887   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:39.573907   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:39.585174   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:39.585191   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:39.639301   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:39.632705    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.633125    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.634665    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.635127    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.636638    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:19:39.632705    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.633125    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.634665    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.635127    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.636638    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:19:39.639313   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:39.639322   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:39.699438   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:39.699455   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
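	With no kube-system containers found, log gathering falls back to the node itself: the kubelet and CRI-O journals, recent kernel warnings, and a raw container listing (with a docker fallback if crictl is missing). Run by hand, the same collection is (a sketch):
	
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400
	  sudo crictl ps -a || sudo docker ps -a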
	I1002 20:19:42.228926   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:42.239185   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:42.239234   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:42.263214   39074 cri.go:89] found id: ""
	I1002 20:19:42.263230   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.263238   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:42.263245   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:42.263288   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:42.286996   39074 cri.go:89] found id: ""
	I1002 20:19:42.287009   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.287014   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:42.287019   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:42.287059   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:42.311539   39074 cri.go:89] found id: ""
	I1002 20:19:42.311555   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.311563   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:42.311568   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:42.311608   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:42.335720   39074 cri.go:89] found id: ""
	I1002 20:19:42.335735   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.335740   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:42.335744   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:42.335789   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:42.359620   39074 cri.go:89] found id: ""
	I1002 20:19:42.359635   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.359642   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:42.359658   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:42.359717   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:42.383670   39074 cri.go:89] found id: ""
	I1002 20:19:42.383684   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.383702   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:42.383708   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:42.383752   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:42.409324   39074 cri.go:89] found id: ""
	I1002 20:19:42.409337   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.409343   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:42.409350   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:42.409358   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:42.463480   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:42.456002    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.456468    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.458629    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.459138    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.460809    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:19:42.456002    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.456468    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.458629    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.459138    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.460809    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:19:42.463498   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:42.463508   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:42.522978   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:42.522994   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:42.550529   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:42.550544   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:42.618426   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:42.618446   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:45.130475   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:45.140935   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:45.140984   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:45.166296   39074 cri.go:89] found id: ""
	I1002 20:19:45.166307   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.166313   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:45.166318   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:45.166370   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:45.190669   39074 cri.go:89] found id: ""
	I1002 20:19:45.190684   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.190690   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:45.190694   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:45.190748   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:45.215836   39074 cri.go:89] found id: ""
	I1002 20:19:45.215861   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.215866   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:45.215870   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:45.215911   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:45.240020   39074 cri.go:89] found id: ""
	I1002 20:19:45.240032   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.240037   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:45.240054   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:45.240103   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:45.265411   39074 cri.go:89] found id: ""
	I1002 20:19:45.265424   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.265430   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:45.265434   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:45.265482   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:45.289247   39074 cri.go:89] found id: ""
	I1002 20:19:45.289262   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.289272   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:45.289277   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:45.289327   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:45.313127   39074 cri.go:89] found id: ""
	I1002 20:19:45.313142   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.313149   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:45.313157   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:45.313175   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:45.383170   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:45.383189   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:45.394492   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:45.394506   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:45.448758   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:45.441841    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.442413    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.443998    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.444386    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.445933    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:19:45.441841    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.442413    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.443998    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.444386    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.445933    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:19:45.448771   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:45.448780   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:45.512497   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:45.512515   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:48.041482   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:48.051591   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:48.051635   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:48.076424   39074 cri.go:89] found id: ""
	I1002 20:19:48.076441   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.076449   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:48.076454   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:48.076499   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:48.100297   39074 cri.go:89] found id: ""
	I1002 20:19:48.100324   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.100330   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:48.100334   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:48.100378   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:48.124828   39074 cri.go:89] found id: ""
	I1002 20:19:48.124845   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.124854   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:48.124860   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:48.124916   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:48.148977   39074 cri.go:89] found id: ""
	I1002 20:19:48.148991   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.148998   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:48.149002   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:48.149045   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:48.172962   39074 cri.go:89] found id: ""
	I1002 20:19:48.172978   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.172987   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:48.172992   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:48.173078   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:48.196028   39074 cri.go:89] found id: ""
	I1002 20:19:48.196047   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.196056   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:48.196063   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:48.196116   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:48.219489   39074 cri.go:89] found id: ""
	I1002 20:19:48.219506   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.219514   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:48.219524   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:48.219535   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:48.285750   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:48.285767   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:48.296759   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:48.296773   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:48.350552   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:48.343634    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.344266    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.345849    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.346274    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.347827    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:19:48.343634    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.344266    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.345849    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.346274    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.347827    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:19:48.350562   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:48.350570   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:48.415152   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:48.415174   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
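
The block above is one complete pass of minikube's apiserver wait loop: pgrep for a live kube-apiserver process, then crictl for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) in any state, and, with every probe coming back empty, a full log gather before the next retry. A minimal sketch of the same probes, assembled from the commands in the log, that can be run by hand on the node to reproduce what the loop sees:

    # One pass of the probe loop above, run manually on the node.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # any live apiserver process?
    for c in kube-apiserver etcd coredns kube-scheduler \
             kube-proxy kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"           # container IDs in any state; empty here
    done
    sudo journalctl -u kubelet -n 400                 # kubelet should say why no static pods start
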
	I1002 20:19:50.944831   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:50.955007   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:50.955051   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:50.979562   39074 cri.go:89] found id: ""
	I1002 20:19:50.979574   39074 logs.go:282] 0 containers: []
	W1002 20:19:50.979580   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:50.979586   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:50.979626   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:51.005726   39074 cri.go:89] found id: ""
	I1002 20:19:51.005738   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.005744   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:51.005748   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:51.005789   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:51.029734   39074 cri.go:89] found id: ""
	I1002 20:19:51.029751   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.029760   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:51.029766   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:51.029810   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:51.053889   39074 cri.go:89] found id: ""
	I1002 20:19:51.053904   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.053912   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:51.053918   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:51.053970   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:51.080377   39074 cri.go:89] found id: ""
	I1002 20:19:51.080389   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.080394   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:51.080399   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:51.080438   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:51.105307   39074 cri.go:89] found id: ""
	I1002 20:19:51.105321   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.105326   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:51.105331   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:51.105371   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:51.130666   39074 cri.go:89] found id: ""
	I1002 20:19:51.130682   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.130689   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:51.130700   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:51.130710   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:51.141518   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:51.141533   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:51.194182   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:51.187772    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.188306    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.189890    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.190325    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.191812    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:19:51.187772    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.188306    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.189890    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.190325    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.191812    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:19:51.194195   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:51.194204   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:51.253875   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:51.253894   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:51.281673   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:51.281693   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:53.847012   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:53.857350   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:53.857394   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:53.882278   39074 cri.go:89] found id: ""
	I1002 20:19:53.882291   39074 logs.go:282] 0 containers: []
	W1002 20:19:53.882297   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:53.882309   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:53.882351   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:53.906222   39074 cri.go:89] found id: ""
	I1002 20:19:53.906235   39074 logs.go:282] 0 containers: []
	W1002 20:19:53.906241   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:53.906245   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:53.906294   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:53.930975   39074 cri.go:89] found id: ""
	I1002 20:19:53.930988   39074 logs.go:282] 0 containers: []
	W1002 20:19:53.930995   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:53.930999   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:53.931045   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:53.957875   39074 cri.go:89] found id: ""
	I1002 20:19:53.957891   39074 logs.go:282] 0 containers: []
	W1002 20:19:53.957901   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:53.957907   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:53.958019   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:53.982116   39074 cri.go:89] found id: ""
	I1002 20:19:53.982129   39074 logs.go:282] 0 containers: []
	W1002 20:19:53.982135   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:53.982140   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:53.982181   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:54.006296   39074 cri.go:89] found id: ""
	I1002 20:19:54.006310   39074 logs.go:282] 0 containers: []
	W1002 20:19:54.006316   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:54.006320   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:54.006360   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:54.031088   39074 cri.go:89] found id: ""
	I1002 20:19:54.031102   39074 logs.go:282] 0 containers: []
	W1002 20:19:54.031108   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:54.031116   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:54.031125   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:54.041909   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:54.041951   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:54.095399   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:54.088843    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.089263    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.090810    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.091232    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.092782    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:19:54.088843    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.089263    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.090810    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.091232    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.092782    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:19:54.095411   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:54.095438   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:54.159991   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:54.160010   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:54.187642   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:54.187676   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:56.757287   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:56.768252   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:56.768293   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:56.793773   39074 cri.go:89] found id: ""
	I1002 20:19:56.793785   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.793791   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:56.793796   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:56.793841   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:56.819484   39074 cri.go:89] found id: ""
	I1002 20:19:56.819499   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.819509   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:56.819516   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:56.819558   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:56.844773   39074 cri.go:89] found id: ""
	I1002 20:19:56.844787   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.844793   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:56.844798   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:56.844838   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:56.869847   39074 cri.go:89] found id: ""
	I1002 20:19:56.869888   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.869898   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:56.869906   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:56.869956   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:56.894519   39074 cri.go:89] found id: ""
	I1002 20:19:56.894537   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.894545   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:56.894553   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:56.894613   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:56.920670   39074 cri.go:89] found id: ""
	I1002 20:19:56.920689   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.920698   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:56.920706   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:56.920758   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:56.945515   39074 cri.go:89] found id: ""
	I1002 20:19:56.945529   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.945535   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:56.945543   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:56.945557   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:57.001311   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:56.994723    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.995244    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.996779    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.997235    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.998722    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:19:56.994723    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.995244    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.996779    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.997235    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.998722    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:19:57.001323   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:57.001332   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:57.065838   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:57.065856   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:57.093387   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:57.093401   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:57.161709   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:57.161730   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
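
A note on the dmesg gather just above: the flags keep the transfer small and parseable. Per util-linux dmesg, -H prints human-readable timestamps, -P disables the pager that -H would otherwise invoke, -L=never suppresses color codes, and --level restricts output to warnings and worse; tail then caps it at 400 lines. The same filter, annotated (a restatement of the logged command, not a new one):

    # -H human timestamps, -P no pager, -L=never no colour,
    # --level keeps warn and worse; tail caps the output at 400 lines.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
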
	I1002 20:19:59.673972   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:59.684279   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:59.684321   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:59.708892   39074 cri.go:89] found id: ""
	I1002 20:19:59.708905   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.708911   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:59.708915   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:59.708958   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:59.733806   39074 cri.go:89] found id: ""
	I1002 20:19:59.733821   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.733828   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:59.733834   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:59.733886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:59.758895   39074 cri.go:89] found id: ""
	I1002 20:19:59.758907   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.758913   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:59.758918   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:59.758970   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:59.782140   39074 cri.go:89] found id: ""
	I1002 20:19:59.782154   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.782161   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:59.782166   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:59.782211   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:59.806783   39074 cri.go:89] found id: ""
	I1002 20:19:59.806797   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.806803   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:59.806808   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:59.806851   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:59.831636   39074 cri.go:89] found id: ""
	I1002 20:19:59.831663   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.831673   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:59.831679   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:59.831725   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:59.855094   39074 cri.go:89] found id: ""
	I1002 20:19:59.855110   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.855119   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:59.855128   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:59.855139   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:59.916579   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:59.916598   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:59.944216   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:59.944230   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:00.010694   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:00.010712   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:00.021993   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:00.022008   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:00.076257   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:00.069139    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.069711    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.071246    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.071701    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.073412    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:00.069139    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.069711    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.071246    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.071701    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.073412    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
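
Every describe-nodes attempt in this stretch fails identically: kubectl, pointed at the kubeconfig's server on localhost:8441, gets "connection refused", which means nothing is listening on the apiserver port at all, as opposed to an apiserver that is up but answering unhealthily. A quick hand check to tell those two states apart (the 8441 port is taken from the log; a stock minikube apiserver more commonly sits on 8443):

    # Distinguish "nothing listening" from "up but unhealthy".
    curl -sk https://localhost:8441/healthz; echo "exit=$?"   # exit 7 = connection refused
    sudo ss -tlnp | grep 8441                                 # who, if anyone, owns the port
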
	I1002 20:20:02.577956   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:02.588476   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:02.588521   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:02.612197   39074 cri.go:89] found id: ""
	I1002 20:20:02.612213   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.612224   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:02.612231   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:02.612283   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:02.636711   39074 cri.go:89] found id: ""
	I1002 20:20:02.636727   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.636737   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:02.636743   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:02.636797   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:02.660364   39074 cri.go:89] found id: ""
	I1002 20:20:02.660380   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.660389   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:02.660396   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:02.660448   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:02.684665   39074 cri.go:89] found id: ""
	I1002 20:20:02.684682   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.684689   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:02.684694   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:02.684739   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:02.710226   39074 cri.go:89] found id: ""
	I1002 20:20:02.710239   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.710247   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:02.710254   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:02.710308   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:02.735247   39074 cri.go:89] found id: ""
	I1002 20:20:02.735262   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.735271   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:02.735278   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:02.735328   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:02.760072   39074 cri.go:89] found id: ""
	I1002 20:20:02.760085   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.760091   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:02.760098   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:02.760106   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:02.824182   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:02.824200   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:02.835284   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:02.835297   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:02.888320   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:02.881490    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.881999    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.883536    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.883961    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.885446    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:02.881490    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.881999    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.883536    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.883961    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.885446    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:02.888330   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:02.888339   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:02.952125   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:02.952145   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:05.481086   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:05.491660   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:05.491723   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:05.517036   39074 cri.go:89] found id: ""
	I1002 20:20:05.517052   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.517060   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:05.517067   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:05.517114   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:05.542299   39074 cri.go:89] found id: ""
	I1002 20:20:05.542312   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.542320   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:05.542326   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:05.542387   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:05.567213   39074 cri.go:89] found id: ""
	I1002 20:20:05.567227   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.567233   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:05.567238   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:05.567286   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:05.590782   39074 cri.go:89] found id: ""
	I1002 20:20:05.590795   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.590801   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:05.590807   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:05.590850   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:05.615825   39074 cri.go:89] found id: ""
	I1002 20:20:05.615837   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.615843   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:05.615849   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:05.615886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:05.640124   39074 cri.go:89] found id: ""
	I1002 20:20:05.640137   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.640143   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:05.640148   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:05.640191   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:05.664435   39074 cri.go:89] found id: ""
	I1002 20:20:05.664451   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.664460   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:05.664469   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:05.664478   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:05.675270   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:05.675284   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:05.728958   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:05.722310    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.722829    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.724378    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.724835    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.726322    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:05.722310    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.722829    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.724378    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.724835    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.726322    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:05.728968   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:05.728977   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:05.789744   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:05.789763   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:05.816871   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:05.816886   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:08.386603   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:08.396838   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:08.396887   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:08.421504   39074 cri.go:89] found id: ""
	I1002 20:20:08.421516   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.421526   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:08.421531   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:08.421573   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:08.445525   39074 cri.go:89] found id: ""
	I1002 20:20:08.445539   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.445551   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:08.445557   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:08.445611   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:08.473912   39074 cri.go:89] found id: ""
	I1002 20:20:08.473926   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.473932   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:08.473937   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:08.473977   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:08.498551   39074 cri.go:89] found id: ""
	I1002 20:20:08.498567   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.498575   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:08.498579   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:08.498619   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:08.522969   39074 cri.go:89] found id: ""
	I1002 20:20:08.522985   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.522991   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:08.522996   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:08.523041   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:08.546557   39074 cri.go:89] found id: ""
	I1002 20:20:08.546572   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.546579   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:08.546583   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:08.546628   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:08.570570   39074 cri.go:89] found id: ""
	I1002 20:20:08.570586   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.570595   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:08.570605   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:08.570619   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:08.639672   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:08.639691   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:08.651327   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:08.651345   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:08.704679   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:08.698150    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.698634    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.700211    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.700630    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.702066    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:08.698150    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.698634    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.700211    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.700630    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.702066    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:08.704698   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:08.704710   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:08.767857   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:08.767876   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
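
The container-status gather above uses a small shell fallback worth unpacking: the backquoted "which crictl || echo crictl" substitutes the full path of crictl when it is installed and the bare word crictl otherwise (so a failure still names the missing tool), and if that whole crictl invocation fails the runner falls back to docker ps. Spelled out:

    # Equivalent of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    CRICTL="$(which crictl || echo crictl)"    # full path if installed, else bare name
    sudo "$CRICTL" ps -a || sudo docker ps -a  # try the CRI CLI first, then docker
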
	I1002 20:20:11.297723   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:11.307921   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:11.307963   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:11.337544   39074 cri.go:89] found id: ""
	I1002 20:20:11.337560   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.337577   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:11.337584   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:11.337640   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:11.363291   39074 cri.go:89] found id: ""
	I1002 20:20:11.363306   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.363315   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:11.363325   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:11.363366   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:11.387886   39074 cri.go:89] found id: ""
	I1002 20:20:11.387905   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.387915   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:11.387922   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:11.387972   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:11.412550   39074 cri.go:89] found id: ""
	I1002 20:20:11.412565   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.412573   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:11.412579   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:11.412677   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:11.437380   39074 cri.go:89] found id: ""
	I1002 20:20:11.437396   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.437405   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:11.437411   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:11.437452   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:11.461402   39074 cri.go:89] found id: ""
	I1002 20:20:11.461415   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.461421   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:11.461426   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:11.461471   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:11.486814   39074 cri.go:89] found id: ""
	I1002 20:20:11.486828   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.486833   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:11.486840   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:11.486848   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:11.497776   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:11.497791   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:11.552252   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:11.545574    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.546102    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.547700    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.548151    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.549707    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:11.545574    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.546102    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.547700    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.548151    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.549707    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:11.552263   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:11.552278   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:11.614501   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:11.614519   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:11.641975   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:11.641990   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:14.212363   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:14.223339   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:14.223387   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:14.247765   39074 cri.go:89] found id: ""
	I1002 20:20:14.247782   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.247790   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:14.247796   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:14.247850   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:14.272207   39074 cri.go:89] found id: ""
	I1002 20:20:14.272223   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.272230   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:14.272235   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:14.272275   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:14.296884   39074 cri.go:89] found id: ""
	I1002 20:20:14.296896   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.296901   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:14.296906   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:14.296953   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:14.322400   39074 cri.go:89] found id: ""
	I1002 20:20:14.322416   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.322424   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:14.322430   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:14.322483   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:14.348457   39074 cri.go:89] found id: ""
	I1002 20:20:14.348474   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.348482   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:14.348488   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:14.348529   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:14.371846   39074 cri.go:89] found id: ""
	I1002 20:20:14.371859   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.371866   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:14.371870   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:14.371910   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:14.396739   39074 cri.go:89] found id: ""
	I1002 20:20:14.396757   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.396765   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:14.396775   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:14.396785   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:14.461682   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:14.461703   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:14.473125   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:14.473138   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:14.527220   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:14.520100    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.520639    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.522150    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.522547    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.524758    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:14.527230   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:14.527243   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:14.587080   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:14.587097   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:17.117171   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:17.127800   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:17.127860   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:17.153825   39074 cri.go:89] found id: ""
	I1002 20:20:17.153838   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.153845   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:17.153850   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:17.153890   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:17.179191   39074 cri.go:89] found id: ""
	I1002 20:20:17.179208   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.179218   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:17.179225   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:17.179283   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:17.203643   39074 cri.go:89] found id: ""
	I1002 20:20:17.203670   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.203677   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:17.203682   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:17.203729   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:17.228485   39074 cri.go:89] found id: ""
	I1002 20:20:17.228500   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.228509   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:17.228513   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:17.228552   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:17.254499   39074 cri.go:89] found id: ""
	I1002 20:20:17.254513   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.254519   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:17.254524   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:17.254568   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:17.280943   39074 cri.go:89] found id: ""
	I1002 20:20:17.280959   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.280968   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:17.280975   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:17.281022   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:17.306591   39074 cri.go:89] found id: ""
	I1002 20:20:17.306607   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.306615   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:17.306624   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:17.306638   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:17.365595   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:17.358275    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.359542    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.359993    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.361559    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.362067    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:17.365605   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:17.365615   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:17.428722   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:17.428741   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:17.456720   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:17.456736   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:17.526400   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:17.526419   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:20.038675   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:20.049608   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:20.049670   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:20.075162   39074 cri.go:89] found id: ""
	I1002 20:20:20.075178   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.075193   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:20.075200   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:20.075244   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:20.100714   39074 cri.go:89] found id: ""
	I1002 20:20:20.100730   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.100739   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:20.100745   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:20.100796   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:20.125515   39074 cri.go:89] found id: ""
	I1002 20:20:20.125530   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.125536   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:20.125541   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:20.125590   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:20.150152   39074 cri.go:89] found id: ""
	I1002 20:20:20.150166   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.150172   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:20.150176   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:20.150219   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:20.174386   39074 cri.go:89] found id: ""
	I1002 20:20:20.174400   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.174405   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:20.174410   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:20.174451   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:20.198954   39074 cri.go:89] found id: ""
	I1002 20:20:20.198967   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.198974   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:20.198978   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:20.199019   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:20.223494   39074 cri.go:89] found id: ""
	I1002 20:20:20.223506   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.223512   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:20.223520   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:20.223530   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:20.234227   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:20.234242   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:20.287508   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:20.281135    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.281556    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.283225    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.283624    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.285109    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:20.287521   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:20.287530   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:20.353299   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:20.353316   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:20.381247   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:20.381264   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
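
The kubectl calls dial localhost:8441 because that is the server endpoint written into the kubeconfig on the node. A quick check of the endpoint being probed (the kubeconfig path comes from the log; the exact URL shown below is inferred from the dial errors, not confirmed):

	sudo grep 'server:' /var/lib/minikube/kubeconfig
	# expected, matching the errors above:
	#     server: https://localhost:8441
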
	I1002 20:20:22.948641   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:22.958867   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:22.958923   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:22.982867   39074 cri.go:89] found id: ""
	I1002 20:20:22.982888   39074 logs.go:282] 0 containers: []
	W1002 20:20:22.982896   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:22.982905   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:22.982963   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:23.008002   39074 cri.go:89] found id: ""
	I1002 20:20:23.008019   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.008025   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:23.008031   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:23.008102   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:23.032729   39074 cri.go:89] found id: ""
	I1002 20:20:23.032745   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.032755   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:23.032761   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:23.032805   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:23.057489   39074 cri.go:89] found id: ""
	I1002 20:20:23.057506   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.057513   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:23.057520   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:23.057574   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:23.082449   39074 cri.go:89] found id: ""
	I1002 20:20:23.082465   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.082473   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:23.082480   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:23.082533   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:23.106284   39074 cri.go:89] found id: ""
	I1002 20:20:23.106300   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.106308   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:23.106314   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:23.106356   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:23.131674   39074 cri.go:89] found id: ""
	I1002 20:20:23.131689   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.131698   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:23.131708   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:23.131719   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:23.202584   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:23.202606   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:23.213553   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:23.213567   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:23.267093   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:23.260296    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.260752    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.262302    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.262721    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.264215    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:23.267107   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:23.267117   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:23.330039   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:23.330057   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:25.859757   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:25.870050   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:25.870094   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:25.893890   39074 cri.go:89] found id: ""
	I1002 20:20:25.893903   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.893909   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:25.893913   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:25.893962   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:25.918711   39074 cri.go:89] found id: ""
	I1002 20:20:25.918724   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.918731   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:25.918740   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:25.918790   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:25.943028   39074 cri.go:89] found id: ""
	I1002 20:20:25.943040   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.943046   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:25.943050   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:25.943100   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:25.968555   39074 cri.go:89] found id: ""
	I1002 20:20:25.968569   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.968575   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:25.968580   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:25.968630   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:25.993321   39074 cri.go:89] found id: ""
	I1002 20:20:25.993334   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.993340   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:25.993344   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:25.993393   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:26.017729   39074 cri.go:89] found id: ""
	I1002 20:20:26.017755   39074 logs.go:282] 0 containers: []
	W1002 20:20:26.017761   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:26.017766   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:26.017807   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:26.042867   39074 cri.go:89] found id: ""
	I1002 20:20:26.042879   39074 logs.go:282] 0 containers: []
	W1002 20:20:26.042885   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:26.042892   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:26.042900   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:26.109498   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:26.109517   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:26.120700   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:26.120715   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:26.174158   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:26.167675    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.168158    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.169684    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.170006    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.171555    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:26.174169   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:26.174177   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:26.232801   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:26.232820   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:28.760440   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:28.770974   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:28.771015   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:28.795071   39074 cri.go:89] found id: ""
	I1002 20:20:28.795084   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.795089   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:28.795094   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:28.795137   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:28.820101   39074 cri.go:89] found id: ""
	I1002 20:20:28.820114   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.820120   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:28.820125   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:28.820174   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:28.844954   39074 cri.go:89] found id: ""
	I1002 20:20:28.844967   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.844974   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:28.844978   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:28.845021   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:28.869971   39074 cri.go:89] found id: ""
	I1002 20:20:28.869984   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.869991   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:28.869996   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:28.870035   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:28.894419   39074 cri.go:89] found id: ""
	I1002 20:20:28.894434   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.894443   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:28.894454   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:28.894497   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:28.919785   39074 cri.go:89] found id: ""
	I1002 20:20:28.919798   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.919804   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:28.919808   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:28.919847   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:28.945626   39074 cri.go:89] found id: ""
	I1002 20:20:28.945644   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.945666   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:28.945676   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:28.945688   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:29.013406   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:29.013424   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:29.024733   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:29.024749   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:29.079492   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:29.073004    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.073547    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.075195    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.075620    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.077061    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:29.079501   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:29.079510   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:29.143375   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:29.143393   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
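
Every retry iteration above performs the same sweep: pgrep for a running kube-apiserver process, then a crictl listing per expected component. A compact sketch of that sweep, using only the component names and crictl flags that appear in this log:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  [ -n "$ids" ] && echo "$c: $ids" || echo "no container found matching \"$c\""
	done
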
	I1002 20:20:31.673342   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:31.683685   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:31.683744   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:31.708355   39074 cri.go:89] found id: ""
	I1002 20:20:31.708368   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.708374   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:31.708378   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:31.708426   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:31.732066   39074 cri.go:89] found id: ""
	I1002 20:20:31.732080   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.732085   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:31.732090   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:31.732128   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:31.756955   39074 cri.go:89] found id: ""
	I1002 20:20:31.756968   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.756975   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:31.756981   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:31.757031   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:31.783141   39074 cri.go:89] found id: ""
	I1002 20:20:31.783157   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.783163   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:31.783168   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:31.783209   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:31.807678   39074 cri.go:89] found id: ""
	I1002 20:20:31.807691   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.807698   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:31.807703   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:31.807745   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:31.831482   39074 cri.go:89] found id: ""
	I1002 20:20:31.831494   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.831500   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:31.831504   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:31.831548   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:31.855667   39074 cri.go:89] found id: ""
	I1002 20:20:31.855683   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.855692   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:31.855700   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:31.855710   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:31.882380   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:31.882395   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:31.947814   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:31.947838   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:31.958919   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:31.958934   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:32.013721   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:32.006971    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.007473    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.009037    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.009432    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.010967    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:32.013731   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:32.013742   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:34.575751   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:34.585980   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:34.586030   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:34.610997   39074 cri.go:89] found id: ""
	I1002 20:20:34.611013   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.611019   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:34.611024   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:34.611076   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:34.635375   39074 cri.go:89] found id: ""
	I1002 20:20:34.635388   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.635394   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:34.635401   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:34.635449   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:34.659513   39074 cri.go:89] found id: ""
	I1002 20:20:34.659526   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.659532   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:34.659536   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:34.659584   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:34.683614   39074 cri.go:89] found id: ""
	I1002 20:20:34.683628   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.683634   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:34.683638   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:34.683709   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:34.707536   39074 cri.go:89] found id: ""
	I1002 20:20:34.707548   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.707554   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:34.707558   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:34.707606   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:34.730813   39074 cri.go:89] found id: ""
	I1002 20:20:34.730829   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.730838   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:34.730844   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:34.730886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:34.756746   39074 cri.go:89] found id: ""
	I1002 20:20:34.756758   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.756763   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:34.756770   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:34.756779   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:34.823845   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:34.823864   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:34.834944   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:34.834959   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:34.889016   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:34.882235    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.882739    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.884456    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.884966    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.886550    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:34.889027   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:34.889039   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:34.952102   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:34.952120   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
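
The "container status" gather is deliberately runtime-agnostic: the backtick substitution picks crictl when it is installed, and the trailing alternation falls back to docker if the first command fails. Annotated form of the exact command from the log:

	# prefer crictl when present ...
	sudo `which crictl || echo crictl` ps -a \
	  || sudo docker ps -a   # ... otherwise (or on failure) fall back to docker
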
	I1002 20:20:37.482142   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:37.492739   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:37.492783   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:37.518265   39074 cri.go:89] found id: ""
	I1002 20:20:37.518279   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.518285   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:37.518290   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:37.518332   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:37.544309   39074 cri.go:89] found id: ""
	I1002 20:20:37.544322   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.544327   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:37.544332   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:37.544371   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:37.568928   39074 cri.go:89] found id: ""
	I1002 20:20:37.568947   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.568955   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:37.568960   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:37.569000   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:37.593112   39074 cri.go:89] found id: ""
	I1002 20:20:37.593125   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.593131   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:37.593135   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:37.593175   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:37.617378   39074 cri.go:89] found id: ""
	I1002 20:20:37.617393   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.617399   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:37.617404   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:37.617446   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:37.641497   39074 cri.go:89] found id: ""
	I1002 20:20:37.641509   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.641514   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:37.641519   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:37.641560   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:37.665025   39074 cri.go:89] found id: ""
	I1002 20:20:37.665037   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.665043   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:37.665050   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:37.665059   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:37.729867   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:37.729886   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:37.741144   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:37.741161   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:37.794545   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:37.788104    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.788618    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.790124    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.790578    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.791673    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:37.788104    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.788618    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.790124    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.790578    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.791673    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:37.794554   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:37.794563   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:37.858517   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:37.858537   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
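What the log is showing, step by step, is a polling loop: roughly every 3 seconds minikube looks for a kube-apiserver process, asks the CRI for each expected control-plane container, and re-gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output when nothing is found. A minimal shell sketch of that probe cycle, assuming SSH access to the node (a reconstruction from the log above, not minikube's actual Go implementation):

    #!/usr/bin/env bash
    # Reconstruction of the probe cycle visible in the log: wait for a
    # running kube-apiserver, listing CRI containers for each expected
    # component on every pass.
    components=(kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet)
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      for c in "${components[@]}"; do
        ids=$(sudo crictl ps -a --quiet --name="$c")
        [ -z "$ids" ] && echo "no container found matching \"$c\""
      done
      sleep 3   # the log shows ~3s between cycles
    done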
	I1002 20:20:40.387221   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:40.397406   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:40.397456   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:40.422226   39074 cri.go:89] found id: ""
	I1002 20:20:40.422241   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.422249   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:40.422256   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:40.422312   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:40.448898   39074 cri.go:89] found id: ""
	I1002 20:20:40.448914   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.448922   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:40.448928   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:40.448970   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:40.473866   39074 cri.go:89] found id: ""
	I1002 20:20:40.473883   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.473891   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:40.473898   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:40.473940   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:40.499789   39074 cri.go:89] found id: ""
	I1002 20:20:40.499804   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.499820   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:40.499827   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:40.499870   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:40.524055   39074 cri.go:89] found id: ""
	I1002 20:20:40.524070   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.524078   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:40.524084   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:40.524131   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:40.549681   39074 cri.go:89] found id: ""
	I1002 20:20:40.549697   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.549705   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:40.549709   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:40.549751   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:40.574534   39074 cri.go:89] found id: ""
	I1002 20:20:40.574551   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.574559   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:40.574568   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:40.574585   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:40.585332   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:40.585345   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:40.639552   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:40.632883    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.633368    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.634904    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.635294    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.636825    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:40.632883    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.633368    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.634904    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.635294    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.636825    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:40.639561   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:40.639570   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:40.703074   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:40.703093   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:40.731458   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:40.731471   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:43.302779   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:43.313194   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:43.313249   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:43.340348   39074 cri.go:89] found id: ""
	I1002 20:20:43.340361   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.340367   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:43.340372   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:43.340416   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:43.365438   39074 cri.go:89] found id: ""
	I1002 20:20:43.365453   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.365461   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:43.365467   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:43.365530   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:43.392295   39074 cri.go:89] found id: ""
	I1002 20:20:43.392308   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.392314   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:43.392319   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:43.392358   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:43.417313   39074 cri.go:89] found id: ""
	I1002 20:20:43.417326   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.417332   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:43.417336   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:43.417381   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:43.441890   39074 cri.go:89] found id: ""
	I1002 20:20:43.441907   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.441913   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:43.441917   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:43.441959   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:43.467410   39074 cri.go:89] found id: ""
	I1002 20:20:43.467427   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.467438   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:43.467444   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:43.467501   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:43.492142   39074 cri.go:89] found id: ""
	I1002 20:20:43.492154   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.492160   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:43.492168   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:43.492178   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:43.520876   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:43.520907   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:43.586242   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:43.586258   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:43.597341   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:43.597355   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:43.651087   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:43.644558    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.645043    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.646588    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.647003    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.648460    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:43.644558    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.645043    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.646588    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.647003    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.648460    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:43.651098   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:43.651112   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:46.210362   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:46.220658   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:46.220710   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:46.245577   39074 cri.go:89] found id: ""
	I1002 20:20:46.245591   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.245597   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:46.245601   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:46.245641   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:46.270950   39074 cri.go:89] found id: ""
	I1002 20:20:46.270965   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.270974   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:46.270979   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:46.271024   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:46.295887   39074 cri.go:89] found id: ""
	I1002 20:20:46.295903   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.295911   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:46.295917   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:46.295969   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:46.321705   39074 cri.go:89] found id: ""
	I1002 20:20:46.321721   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.321730   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:46.321736   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:46.321785   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:46.348811   39074 cri.go:89] found id: ""
	I1002 20:20:46.348827   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.348836   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:46.348842   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:46.348900   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:46.373477   39074 cri.go:89] found id: ""
	I1002 20:20:46.373493   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.373502   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:46.373508   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:46.373552   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:46.398884   39074 cri.go:89] found id: ""
	I1002 20:20:46.398900   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.398908   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:46.398917   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:46.398926   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:46.463113   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:46.463131   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:46.474566   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:46.474578   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:46.529468   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:46.522633    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.523203    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.524813    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.525199    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.526736    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:46.522633    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.523203    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.524813    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.525199    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.526736    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:46.529479   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:46.529489   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:46.590223   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:46.590241   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:49.118745   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:49.128971   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:49.129012   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:49.155632   39074 cri.go:89] found id: ""
	I1002 20:20:49.155662   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.155683   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:49.155689   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:49.155734   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:49.180611   39074 cri.go:89] found id: ""
	I1002 20:20:49.180629   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.180635   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:49.180639   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:49.180703   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:49.206534   39074 cri.go:89] found id: ""
	I1002 20:20:49.206557   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.206563   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:49.206568   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:49.206617   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:49.231608   39074 cri.go:89] found id: ""
	I1002 20:20:49.231625   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.231633   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:49.231641   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:49.231713   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:49.256407   39074 cri.go:89] found id: ""
	I1002 20:20:49.256426   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.256433   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:49.256439   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:49.256490   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:49.281494   39074 cri.go:89] found id: ""
	I1002 20:20:49.281509   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.281517   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:49.281524   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:49.281571   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:49.306502   39074 cri.go:89] found id: ""
	I1002 20:20:49.306518   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.306526   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:49.306534   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:49.306543   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:49.374386   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:49.374408   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:49.385910   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:49.385928   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:49.440525   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:49.433626    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.434180    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.435811    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.436224    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.437741    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:49.433626    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.434180    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.435811    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.436224    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.437741    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:49.440537   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:49.440549   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:49.501317   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:49.501334   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:52.031253   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:52.041701   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:52.041754   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:52.066302   39074 cri.go:89] found id: ""
	I1002 20:20:52.066315   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.066321   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:52.066325   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:52.066375   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:52.091575   39074 cri.go:89] found id: ""
	I1002 20:20:52.091591   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.091600   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:52.091606   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:52.091674   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:52.115838   39074 cri.go:89] found id: ""
	I1002 20:20:52.115854   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.115861   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:52.115867   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:52.115914   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:52.141387   39074 cri.go:89] found id: ""
	I1002 20:20:52.141402   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.141412   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:52.141417   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:52.141460   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:52.166810   39074 cri.go:89] found id: ""
	I1002 20:20:52.166823   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.166828   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:52.166832   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:52.166872   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:52.192399   39074 cri.go:89] found id: ""
	I1002 20:20:52.192413   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.192420   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:52.192425   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:52.192473   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:52.217364   39074 cri.go:89] found id: ""
	I1002 20:20:52.217378   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.217385   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:52.217391   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:52.217401   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:52.272135   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:52.265457    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.266093    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.267566    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.268058    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.269531    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:52.265457    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.266093    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.267566    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.268058    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.269531    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:52.272144   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:52.272152   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:52.334330   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:52.334352   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:52.364500   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:52.364514   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:52.427683   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:52.427702   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:54.939454   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:54.950121   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:54.950174   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:54.975667   39074 cri.go:89] found id: ""
	I1002 20:20:54.975683   39074 logs.go:282] 0 containers: []
	W1002 20:20:54.975692   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:54.975697   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:54.975739   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:55.000676   39074 cri.go:89] found id: ""
	I1002 20:20:55.000692   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.000702   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:55.000711   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:55.000772   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:55.025484   39074 cri.go:89] found id: ""
	I1002 20:20:55.025499   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.025509   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:55.025516   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:55.025570   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:55.050548   39074 cri.go:89] found id: ""
	I1002 20:20:55.050562   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.050570   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:55.050576   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:55.050623   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:55.075593   39074 cri.go:89] found id: ""
	I1002 20:20:55.075608   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.075613   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:55.075618   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:55.075683   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:55.100182   39074 cri.go:89] found id: ""
	I1002 20:20:55.100196   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.100202   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:55.100206   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:55.100245   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:55.125869   39074 cri.go:89] found id: ""
	I1002 20:20:55.125883   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.125890   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:55.125898   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:55.125907   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:55.194871   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:55.194894   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:55.206048   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:55.206063   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:55.259703   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:55.253143    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.253642    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.255145    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.255538    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.257050    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:55.253143    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.253642    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.255145    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.255538    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.257050    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:55.259714   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:55.259723   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:55.319375   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:55.319393   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:57.847993   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:57.858498   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:57.858550   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:57.881390   39074 cri.go:89] found id: ""
	I1002 20:20:57.881404   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.881412   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:57.881416   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:57.881460   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:57.905251   39074 cri.go:89] found id: ""
	I1002 20:20:57.905267   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.905274   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:57.905279   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:57.905318   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:57.931213   39074 cri.go:89] found id: ""
	I1002 20:20:57.931226   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.931233   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:57.931238   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:57.931280   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:57.954527   39074 cri.go:89] found id: ""
	I1002 20:20:57.954544   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.954558   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:57.954564   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:57.954604   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:57.978788   39074 cri.go:89] found id: ""
	I1002 20:20:57.978801   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.978807   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:57.978811   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:57.978861   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:58.004052   39074 cri.go:89] found id: ""
	I1002 20:20:58.004067   39074 logs.go:282] 0 containers: []
	W1002 20:20:58.004075   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:58.004082   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:58.004123   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:58.028322   39074 cri.go:89] found id: ""
	I1002 20:20:58.028335   39074 logs.go:282] 0 containers: []
	W1002 20:20:58.028341   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:58.028348   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:58.028357   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:58.094257   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:58.094275   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:58.105903   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:58.105918   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:58.160072   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:58.153230   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.153795   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.155325   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.155732   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.157257   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:20:58.153230   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.153795   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.155325   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.155732   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.157257   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:20:58.160081   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:58.160090   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:58.219413   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:58.219430   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:00.748760   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:00.759397   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:00.759452   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:00.783722   39074 cri.go:89] found id: ""
	I1002 20:21:00.783738   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.783747   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:00.783755   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:00.783811   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:00.808536   39074 cri.go:89] found id: ""
	I1002 20:21:00.808552   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.808560   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:00.808565   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:00.808619   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:00.833822   39074 cri.go:89] found id: ""
	I1002 20:21:00.833839   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.833846   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:00.833850   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:00.833893   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:00.857297   39074 cri.go:89] found id: ""
	I1002 20:21:00.857311   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.857317   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:00.857322   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:00.857372   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:00.882563   39074 cri.go:89] found id: ""
	I1002 20:21:00.882578   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.882586   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:00.882592   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:00.882664   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:00.907673   39074 cri.go:89] found id: ""
	I1002 20:21:00.907689   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.907698   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:00.907704   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:00.907746   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:00.932133   39074 cri.go:89] found id: ""
	I1002 20:21:00.932148   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.932156   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:00.932165   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:00.932179   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:01.000177   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:01.000198   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:01.012252   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:01.012267   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:01.068351   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:01.061526   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.062112   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.063638   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.064089   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.065590   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:01.061526   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.062112   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.063638   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.064089   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.065590   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:01.068361   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:01.068370   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:01.128987   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:01.129007   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
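	The poll above is minikube's health-check loop: it looks for a running kube-apiserver process, asks CRI-O for each expected control-plane container, and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. The same checks can be run by hand to confirm the node's state. A minimal sketch using only the commands shown in the log above; running them via `minikube ssh` into the test's profile is an assumption, not something the log states:
	
	# Is an apiserver process running at all? (returns nothing here)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Ask CRI-O for each expected control-plane container (all empty here)
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name=$name
	done
	# The log sources minikube falls back to
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo crictl ps -a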
	I1002 20:21:03.659911   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:03.670393   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:03.670439   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:03.695784   39074 cri.go:89] found id: ""
	I1002 20:21:03.695796   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.695802   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:03.695806   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:03.695846   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:03.720085   39074 cri.go:89] found id: ""
	I1002 20:21:03.720098   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.720104   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:03.720109   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:03.720150   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:03.745925   39074 cri.go:89] found id: ""
	I1002 20:21:03.745940   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.745950   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:03.745958   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:03.745996   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:03.770616   39074 cri.go:89] found id: ""
	I1002 20:21:03.770632   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.770639   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:03.770655   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:03.770711   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:03.793953   39074 cri.go:89] found id: ""
	I1002 20:21:03.793969   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.793977   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:03.793982   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:03.794028   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:03.818909   39074 cri.go:89] found id: ""
	I1002 20:21:03.818925   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.818933   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:03.818940   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:03.818996   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:03.843200   39074 cri.go:89] found id: ""
	I1002 20:21:03.843213   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.843219   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:03.843228   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:03.843237   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:03.901520   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:03.901537   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:03.929305   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:03.929319   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:03.993117   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:03.993134   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:04.004664   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:04.004678   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:04.058624   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:04.051963   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.052457   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.053947   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.054366   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.055857   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:04.051963   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.052457   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.053947   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.054366   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.055857   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
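	Every `describe nodes` attempt fails the same way: the kubeconfig points kubectl at https://localhost:8441 and nothing is listening there. A hedged sketch for confirming from inside the node that the port is simply closed; `ss` and `curl` are standard tools, and their availability in the node image is an assumption:
	
	# Show any listener on the apiserver port (empty output would match the refusals above)
	sudo ss -ltnp | grep 8441
	# Direct probe; with no apiserver up this fails with "connection refused"
	curl -sk https://localhost:8441/healthz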
	[The identical diagnostic poll repeats at 20:21:06, 20:21:09, 20:21:12, 20:21:15, 20:21:18, 20:21:21, and 20:21:24. Each pass finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers, runs the same log-gathering steps (kubelet, dmesg, describe nodes, CRI-O, container status, in varying order), and each `kubectl describe nodes` attempt exits with status 1 on "The connection to the server localhost:8441 was refused"; only the kubectl process IDs differ between passes.]
	I1002 20:21:26.890112   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:26.900667   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:26.900710   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:26.924781   39074 cri.go:89] found id: ""
	I1002 20:21:26.924794   39074 logs.go:282] 0 containers: []
	W1002 20:21:26.924800   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:26.924805   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:26.924846   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:26.948571   39074 cri.go:89] found id: ""
	I1002 20:21:26.948586   39074 logs.go:282] 0 containers: []
	W1002 20:21:26.948600   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:26.948606   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:26.948661   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:26.972451   39074 cri.go:89] found id: ""
	I1002 20:21:26.972466   39074 logs.go:282] 0 containers: []
	W1002 20:21:26.972472   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:26.972478   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:26.972525   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:26.997499   39074 cri.go:89] found id: ""
	I1002 20:21:26.997512   39074 logs.go:282] 0 containers: []
	W1002 20:21:26.997518   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:26.997523   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:26.997572   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:27.022056   39074 cri.go:89] found id: ""
	I1002 20:21:27.022072   39074 logs.go:282] 0 containers: []
	W1002 20:21:27.022078   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:27.022083   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:27.022124   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:27.046069   39074 cri.go:89] found id: ""
	I1002 20:21:27.046083   39074 logs.go:282] 0 containers: []
	W1002 20:21:27.046089   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:27.046095   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:27.046135   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:27.070455   39074 cri.go:89] found id: ""
	I1002 20:21:27.070469   39074 logs.go:282] 0 containers: []
	W1002 20:21:27.070475   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:27.070482   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:27.070493   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:27.139300   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:27.139317   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:27.150073   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:27.150086   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:27.203171   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:27.196472   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.196973   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.198530   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.198931   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.200409   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:27.196472   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.196973   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.198530   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.198931   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:27.200409   11229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:27.203181   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:27.203189   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:27.265474   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:27.265492   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
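Each retry cycle above repeats the same five gathering steps over SSH: kubelet and CRI-O unit logs via journalctl, kernel warnings via dmesg, kubectl describe nodes against the node's kubeconfig, and a container listing via crictl (falling back to docker). A minimal sketch of one such pass, assuming a hypothetical runCmd helper in place of minikube's ssh_runner; the shell commands themselves are copied verbatim from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runCmd stands in for minikube's SSH runner (hypothetical helper);
    // here it simply runs the command locally through bash.
    func runCmd(cmd string) ([]byte, error) {
    	return exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    }

    func main() {
    	// The five gathering steps, copied verbatim from the log above.
    	cmds := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
    		"CRI-O":            "sudo journalctl -u crio -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range cmds { // map order is randomized, like the shuffled gathering order across cycles
    		out, err := runCmd(cmd)
    		if err != nil {
    			fmt.Printf("failed %s: %v\n", name, err) // describe nodes fails while the apiserver is down
    			continue
    		}
    		fmt.Printf("gathered %s (%d bytes)\n", name, len(out))
    	}
    }
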
	I1002 20:21:29.793992   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:29.804235   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:29.804279   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:29.828729   39074 cri.go:89] found id: ""
	I1002 20:21:29.828743   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.828751   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:29.828757   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:29.828809   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:29.853355   39074 cri.go:89] found id: ""
	I1002 20:21:29.853372   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.853382   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:29.853388   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:29.853439   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:29.878218   39074 cri.go:89] found id: ""
	I1002 20:21:29.878231   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.878236   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:29.878241   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:29.878281   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:29.903091   39074 cri.go:89] found id: ""
	I1002 20:21:29.903105   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.903114   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:29.903120   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:29.903161   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:29.927692   39074 cri.go:89] found id: ""
	I1002 20:21:29.927710   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.927716   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:29.927720   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:29.927769   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:29.952593   39074 cri.go:89] found id: ""
	I1002 20:21:29.952608   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.952618   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:29.952624   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:29.952693   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:29.977117   39074 cri.go:89] found id: ""
	I1002 20:21:29.977133   39074 logs.go:282] 0 containers: []
	W1002 20:21:29.977140   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:29.977150   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:29.977161   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:30.004687   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:30.004701   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:30.071166   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:30.071188   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:30.082387   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:30.082403   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:30.137131   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:30.130268   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.130846   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.132362   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.132758   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.134348   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:30.130268   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.130846   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.132362   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.132758   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:30.134348   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:30.137140   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:30.137148   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
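Before each gathering pass, the same seven control-plane components are probed with one crictl call apiece. With --quiet, crictl prints one container ID per line, so empty output is exactly what produces the found id: "", "0 containers", and No container was found lines above. A sketch of that check; listCRIContainers is illustrative, not the actual cri.go API:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // Empty --quiet output means no container in any state matched the name.
    func listCRIContainers(name string) ([]string, error) {
    	cmd := "sudo crictl ps -a --quiet --name=" + name
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if id != "" {
    			ids = append(ids, id)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	// The seven names probed in every cycle of the log above.
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
    		ids, err := listCRIContainers(name)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %v\n", name, ids)
    	}
    }
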
	I1002 20:21:32.698009   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:32.708134   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:32.708177   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:32.734103   39074 cri.go:89] found id: ""
	I1002 20:21:32.734117   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.734126   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:32.734131   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:32.734179   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:32.758404   39074 cri.go:89] found id: ""
	I1002 20:21:32.758417   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.758423   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:32.758431   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:32.758477   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:32.784135   39074 cri.go:89] found id: ""
	I1002 20:21:32.784150   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.784157   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:32.784161   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:32.784204   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:32.809641   39074 cri.go:89] found id: ""
	I1002 20:21:32.809684   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.809693   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:32.809697   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:32.809739   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:32.833831   39074 cri.go:89] found id: ""
	I1002 20:21:32.833847   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.833856   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:32.833862   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:32.833918   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:32.858510   39074 cri.go:89] found id: ""
	I1002 20:21:32.858523   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.858531   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:32.858537   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:32.858590   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:32.882883   39074 cri.go:89] found id: ""
	I1002 20:21:32.882898   39074 logs.go:282] 0 containers: []
	W1002 20:21:32.882907   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:32.882916   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:32.882928   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:32.951104   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:32.951125   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:32.962042   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:32.962058   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:33.015746   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:33.009215   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.009701   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.011251   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.011629   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.013187   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:33.009215   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.009701   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.011251   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.011629   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:33.013187   11475 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:33.015758   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:33.015772   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:33.074804   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:33.074821   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
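The cycles are paced by the sudo pgrep -xnf kube-apiserver.*minikube.* probe that opens each one, roughly every three seconds: a wait loop polling for an apiserver process until some deadline. A hedged sketch of that shape; the interval matches the spacing seen in the log, while the timeout is illustrative rather than minikube's real constant:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls for a running kube-apiserver process the way the
    // log above does, giving up after a deadline.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			return nil // pgrep exits 0 only when a matching process exists
    		}
    		time.Sleep(3 * time.Second) // matches the ~3 s spacing of the cycles above
    	}
    	return fmt.Errorf("kube-apiserver never came up within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
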
	I1002 20:21:35.603185   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:35.613834   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:35.613876   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:35.638330   39074 cri.go:89] found id: ""
	I1002 20:21:35.638342   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.638348   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:35.638353   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:35.638391   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:35.661464   39074 cri.go:89] found id: ""
	I1002 20:21:35.661476   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.661482   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:35.661487   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:35.661529   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:35.684962   39074 cri.go:89] found id: ""
	I1002 20:21:35.684977   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.684983   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:35.684987   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:35.685036   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:35.708990   39074 cri.go:89] found id: ""
	I1002 20:21:35.709002   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.709007   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:35.709012   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:35.709054   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:35.732099   39074 cri.go:89] found id: ""
	I1002 20:21:35.732116   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.732125   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:35.732134   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:35.732179   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:35.756437   39074 cri.go:89] found id: ""
	I1002 20:21:35.756450   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.756456   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:35.756461   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:35.756501   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:35.782205   39074 cri.go:89] found id: ""
	I1002 20:21:35.782219   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.782225   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:35.782231   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:35.782240   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:35.849923   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:35.849941   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:35.861090   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:35.861104   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:35.914924   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:35.908496   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.908987   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.910547   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.911018   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.912498   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:35.908496   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.908987   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.910547   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.911018   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.912498   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:35.914934   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:35.914943   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:35.975011   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:35.975031   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:38.503369   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:38.513583   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:38.513630   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:38.538175   39074 cri.go:89] found id: ""
	I1002 20:21:38.538190   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.538197   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:38.538201   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:38.538239   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:38.562421   39074 cri.go:89] found id: ""
	I1002 20:21:38.562434   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.562440   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:38.562444   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:38.562510   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:38.587376   39074 cri.go:89] found id: ""
	I1002 20:21:38.587388   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.587394   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:38.587400   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:38.587439   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:38.611178   39074 cri.go:89] found id: ""
	I1002 20:21:38.611192   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.611198   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:38.611202   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:38.611243   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:38.635805   39074 cri.go:89] found id: ""
	I1002 20:21:38.635817   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.635823   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:38.635827   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:38.635872   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:38.660043   39074 cri.go:89] found id: ""
	I1002 20:21:38.660065   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.660071   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:38.660075   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:38.660115   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:38.683490   39074 cri.go:89] found id: ""
	I1002 20:21:38.683502   39074 logs.go:282] 0 containers: []
	W1002 20:21:38.683508   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:38.683515   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:38.683522   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:38.741516   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:38.741534   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:38.769294   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:38.769308   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:38.838736   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:38.838753   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:38.849582   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:38.849612   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:38.903424   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:38.896399   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.896943   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.898498   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.898964   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.900463   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:38.896399   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.896943   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.898498   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.898964   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:38.900463   11742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
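Every describe nodes attempt in this stretch dies the same way: kubectl's API discovery dials the endpoint from the kubeconfig, localhost:8441, and gets connection refused, meaning nothing is listening on the apiserver port. A minimal probe for that exact condition; the port is taken from the stderr above:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // The kubectl errors above all reduce to "dial tcp [::1]:8441: connection
    // refused". This checks whether anything is listening on that port.
    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port closed:", err) // the log's connection refused case
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on 8441")
    }
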
	I1002 20:21:41.405089   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:41.415377   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:41.415426   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:41.440687   39074 cri.go:89] found id: ""
	I1002 20:21:41.440700   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.440707   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:41.440712   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:41.440755   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:41.465054   39074 cri.go:89] found id: ""
	I1002 20:21:41.465075   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.465081   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:41.465086   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:41.465140   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:41.489735   39074 cri.go:89] found id: ""
	I1002 20:21:41.489748   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.489754   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:41.489759   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:41.489799   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:41.514723   39074 cri.go:89] found id: ""
	I1002 20:21:41.514735   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.514740   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:41.514745   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:41.514786   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:41.538573   39074 cri.go:89] found id: ""
	I1002 20:21:41.538586   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.538592   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:41.538597   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:41.538669   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:41.563317   39074 cri.go:89] found id: ""
	I1002 20:21:41.563334   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.563343   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:41.563349   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:41.563389   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:41.587493   39074 cri.go:89] found id: ""
	I1002 20:21:41.587509   39074 logs.go:282] 0 containers: []
	W1002 20:21:41.587515   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:41.587522   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:41.587532   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:41.657445   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:41.657473   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:41.668994   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:41.669012   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:41.722898   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:41.715908   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.716372   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.718002   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.718454   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.720024   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:41.715908   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.716372   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.718002   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.718454   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:41.720024   11849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:41.722911   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:41.722919   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:41.780887   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:41.780909   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:44.310936   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:44.322755   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:44.322807   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:44.347939   39074 cri.go:89] found id: ""
	I1002 20:21:44.347951   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.347958   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:44.347962   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:44.348004   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:44.372444   39074 cri.go:89] found id: ""
	I1002 20:21:44.372460   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.372466   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:44.372472   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:44.372514   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:44.397131   39074 cri.go:89] found id: ""
	I1002 20:21:44.397148   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.397157   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:44.397163   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:44.397215   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:44.421209   39074 cri.go:89] found id: ""
	I1002 20:21:44.421222   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.421228   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:44.421232   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:44.421269   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:44.445113   39074 cri.go:89] found id: ""
	I1002 20:21:44.445125   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.445131   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:44.445135   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:44.445178   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:44.469164   39074 cri.go:89] found id: ""
	I1002 20:21:44.469178   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.469185   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:44.469191   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:44.469248   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:44.494058   39074 cri.go:89] found id: ""
	I1002 20:21:44.494070   39074 logs.go:282] 0 containers: []
	W1002 20:21:44.494076   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:44.494083   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:44.494091   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:44.563166   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:44.563185   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:44.574587   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:44.574601   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:44.627643   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:44.620697   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.621137   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.622679   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.623151   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.624644   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:44.620697   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.621137   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.622679   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.623151   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:44.624644   11975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:44.627670   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:44.627681   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:44.688606   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:44.688623   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:47.218714   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:47.229181   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:47.229224   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:47.254586   39074 cri.go:89] found id: ""
	I1002 20:21:47.254600   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.254607   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:47.254611   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:47.254666   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:47.277466   39074 cri.go:89] found id: ""
	I1002 20:21:47.277479   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.277485   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:47.277489   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:47.277529   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:47.300741   39074 cri.go:89] found id: ""
	I1002 20:21:47.300754   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.300759   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:47.300764   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:47.300819   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:47.325015   39074 cri.go:89] found id: ""
	I1002 20:21:47.325030   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.325037   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:47.325042   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:47.325086   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:47.349241   39074 cri.go:89] found id: ""
	I1002 20:21:47.349256   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.349264   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:47.349270   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:47.349322   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:47.373778   39074 cri.go:89] found id: ""
	I1002 20:21:47.373790   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.373796   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:47.373801   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:47.373847   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:47.397514   39074 cri.go:89] found id: ""
	I1002 20:21:47.397527   39074 logs.go:282] 0 containers: []
	W1002 20:21:47.397532   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:47.397539   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:47.397550   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:47.452728   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:47.446108   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.446609   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.448123   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.448540   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.450035   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:47.446108   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.446609   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.448123   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.448540   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:47.450035   12095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:47.452738   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:47.452748   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:47.513401   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:47.513419   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:47.542325   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:47.542339   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:47.607380   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:47.607397   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:50.119560   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:50.129969   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:50.130031   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:50.154300   39074 cri.go:89] found id: ""
	I1002 20:21:50.154314   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.154322   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:50.154329   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:50.154381   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:50.178814   39074 cri.go:89] found id: ""
	I1002 20:21:50.178831   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.178840   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:50.178846   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:50.178886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:50.202532   39074 cri.go:89] found id: ""
	I1002 20:21:50.202546   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.202553   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:50.202558   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:50.202597   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:50.227602   39074 cri.go:89] found id: ""
	I1002 20:21:50.227620   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.227630   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:50.227636   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:50.227705   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:50.254467   39074 cri.go:89] found id: ""
	I1002 20:21:50.254479   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.254485   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:50.254490   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:50.254534   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:50.279114   39074 cri.go:89] found id: ""
	I1002 20:21:50.279132   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.279141   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:50.279147   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:50.279196   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:50.303673   39074 cri.go:89] found id: ""
	I1002 20:21:50.303689   39074 logs.go:282] 0 containers: []
	W1002 20:21:50.303695   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:50.303703   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:50.303712   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:50.367227   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:50.367244   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:50.394498   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:50.394517   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:50.463556   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:50.463573   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:50.475248   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:50.475266   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:50.530138   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:50.523630   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.524260   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.525840   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.526247   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.527437   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:50.523630   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.524260   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.525840   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.526247   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:50.527437   12240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
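The span from 20:21:50 to 20:21:53 above is one full iteration of the apiserver wait loop: pgrep for the kube-apiserver process, list each expected control-plane container with crictl, then gather kubelet, dmesg, CRI-O, container-status, and describe-nodes diagnostics. A condensed Go sketch of that polling pattern; runCmd here is a hypothetical local stand-in for minikube's SSH runner, not its real API:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runCmd is a hypothetical local substitute for minikube's ssh_runner;
// the real code executes these commands on the node over SSH.
func runCmd(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).Output()
	return string(out), err
}

func main() {
	containers := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
	for {
		// Step 1: is an apiserver process running at all?
		if _, err := runCmd("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
			fmt.Println("kube-apiserver is up")
			return
		}
		// Step 2: list each expected control-plane container
		// (every listing returns "" in the log above).
		for _, name := range containers {
			runCmd("sudo crictl ps -a --quiet --name=" + name)
		}
		// Step 3 (omitted): gather kubelet/dmesg/CRI-O/describe-nodes diagnostics.
		// The log shows roughly 3 s between pgrep attempts.
		time.Sleep(3 * time.Second)
	}
}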
	I1002 20:21:53.031819   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:53.042276   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:53.042319   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:53.066835   39074 cri.go:89] found id: ""
	I1002 20:21:53.066850   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.066865   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:53.066872   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:53.066914   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:53.090995   39074 cri.go:89] found id: ""
	I1002 20:21:53.091008   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.091014   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:53.091018   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:53.091057   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:53.116027   39074 cri.go:89] found id: ""
	I1002 20:21:53.116043   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.116051   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:53.116056   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:53.116097   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:53.141627   39074 cri.go:89] found id: ""
	I1002 20:21:53.141640   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.141661   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:53.141668   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:53.141710   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:53.167140   39074 cri.go:89] found id: ""
	I1002 20:21:53.167157   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.167163   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:53.167167   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:53.167210   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:53.190437   39074 cri.go:89] found id: ""
	I1002 20:21:53.190453   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.190459   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:53.190464   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:53.190506   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:53.214513   39074 cri.go:89] found id: ""
	I1002 20:21:53.214527   39074 logs.go:282] 0 containers: []
	W1002 20:21:53.214534   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:53.214541   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:53.214550   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:53.282233   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:53.282249   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:53.293348   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:53.293361   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:53.347988   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:53.341334   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.341823   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.343307   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.343741   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.345249   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:53.341334   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.341823   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.343307   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.343741   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:53.345249   12346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:53.347998   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:53.348008   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:53.407000   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:53.407019   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:55.936592   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:55.946748   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:55.946803   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:55.971330   39074 cri.go:89] found id: ""
	I1002 20:21:55.971347   39074 logs.go:282] 0 containers: []
	W1002 20:21:55.971353   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:55.971358   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:55.971398   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:55.995571   39074 cri.go:89] found id: ""
	I1002 20:21:55.995585   39074 logs.go:282] 0 containers: []
	W1002 20:21:55.995591   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:55.995595   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:55.995635   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:56.020541   39074 cri.go:89] found id: ""
	I1002 20:21:56.020563   39074 logs.go:282] 0 containers: []
	W1002 20:21:56.020573   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:56.020578   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:56.020620   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:56.045458   39074 cri.go:89] found id: ""
	I1002 20:21:56.045474   39074 logs.go:282] 0 containers: []
	W1002 20:21:56.045480   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:56.045485   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:56.045524   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:56.069082   39074 cri.go:89] found id: ""
	I1002 20:21:56.069094   39074 logs.go:282] 0 containers: []
	W1002 20:21:56.069101   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:56.069105   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:56.069150   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:56.094402   39074 cri.go:89] found id: ""
	I1002 20:21:56.094417   39074 logs.go:282] 0 containers: []
	W1002 20:21:56.094425   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:56.094430   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:56.094471   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:56.118733   39074 cri.go:89] found id: ""
	I1002 20:21:56.118748   39074 logs.go:282] 0 containers: []
	W1002 20:21:56.118755   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:56.118764   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:56.118776   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:56.186773   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:56.186792   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:56.198306   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:56.198321   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:56.253135   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:56.246592   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.247035   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.248560   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.249003   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.250528   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:56.246592   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.247035   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.248560   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.249003   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:56.250528   12466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:56.253144   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:56.253156   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:56.313368   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:56.313384   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:58.841758   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:58.852748   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:58.852795   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:58.878085   39074 cri.go:89] found id: ""
	I1002 20:21:58.878101   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.878109   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:58.878115   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:58.878169   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:58.903034   39074 cri.go:89] found id: ""
	I1002 20:21:58.903047   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.903054   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:58.903058   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:58.903097   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:58.928063   39074 cri.go:89] found id: ""
	I1002 20:21:58.928079   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.928085   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:58.928090   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:58.928132   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:58.953963   39074 cri.go:89] found id: ""
	I1002 20:21:58.953976   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.953982   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:58.953987   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:58.954039   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:58.980346   39074 cri.go:89] found id: ""
	I1002 20:21:58.980363   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.980372   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:58.980379   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:58.980430   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:59.006332   39074 cri.go:89] found id: ""
	I1002 20:21:59.006348   39074 logs.go:282] 0 containers: []
	W1002 20:21:59.006357   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:59.006364   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:59.006422   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:59.030980   39074 cri.go:89] found id: ""
	I1002 20:21:59.030995   39074 logs.go:282] 0 containers: []
	W1002 20:21:59.031004   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:59.031013   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:59.031026   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:59.086481   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:59.079666   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.080350   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.081980   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.082417   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.083935   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:59.079666   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.080350   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.081980   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.082417   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.083935   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:59.086489   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:59.086498   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:59.150520   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:59.150539   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:59.178745   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:59.178759   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:59.248128   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:59.248146   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:01.761244   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:01.771733   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:01.771783   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:01.796879   39074 cri.go:89] found id: ""
	I1002 20:22:01.796894   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.796903   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:01.796908   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:01.796951   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:01.822376   39074 cri.go:89] found id: ""
	I1002 20:22:01.822389   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.822395   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:01.822400   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:01.822445   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:01.847608   39074 cri.go:89] found id: ""
	I1002 20:22:01.847622   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.847628   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:01.847633   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:01.847701   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:01.872893   39074 cri.go:89] found id: ""
	I1002 20:22:01.872913   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.872919   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:01.872924   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:01.872995   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:01.899179   39074 cri.go:89] found id: ""
	I1002 20:22:01.899197   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.899205   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:01.899210   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:01.899258   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:01.925133   39074 cri.go:89] found id: ""
	I1002 20:22:01.925149   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.925158   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:01.925165   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:01.925209   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:01.951281   39074 cri.go:89] found id: ""
	I1002 20:22:01.951294   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.951300   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:01.951307   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:01.951316   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:02.008670   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:02.001480   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.002372   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.004005   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.004404   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.005795   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:02.001480   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.002372   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.004005   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.004404   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.005795   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:02.008684   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:02.008697   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:02.072947   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:02.072969   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:02.102011   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:02.102027   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:02.168431   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:02.168449   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:04.680455   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:04.690926   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:04.690981   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:04.715368   39074 cri.go:89] found id: ""
	I1002 20:22:04.715384   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.715390   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:04.715394   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:04.715438   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:04.739937   39074 cri.go:89] found id: ""
	I1002 20:22:04.739951   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.739956   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:04.739960   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:04.739998   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:04.763534   39074 cri.go:89] found id: ""
	I1002 20:22:04.763546   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.763552   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:04.763556   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:04.763615   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:04.788497   39074 cri.go:89] found id: ""
	I1002 20:22:04.788512   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.788519   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:04.788523   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:04.788571   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:04.813000   39074 cri.go:89] found id: ""
	I1002 20:22:04.813012   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.813018   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:04.813022   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:04.813061   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:04.837324   39074 cri.go:89] found id: ""
	I1002 20:22:04.837336   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.837342   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:04.837347   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:04.837387   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:04.863392   39074 cri.go:89] found id: ""
	I1002 20:22:04.863404   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.863410   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:04.863416   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:04.863425   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:04.917001   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:04.910495   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.911044   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.912561   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.912981   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.914415   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:04.910495   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.911044   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.912561   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.912981   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.914415   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:04.917008   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:04.917017   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:04.980350   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:04.980366   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:05.007566   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:05.007580   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:05.076403   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:05.076419   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:07.589145   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:07.599347   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:07.599390   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:07.623799   39074 cri.go:89] found id: ""
	I1002 20:22:07.623812   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.623818   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:07.623823   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:07.623862   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:07.648210   39074 cri.go:89] found id: ""
	I1002 20:22:07.648222   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.648229   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:07.648233   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:07.648279   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:07.672861   39074 cri.go:89] found id: ""
	I1002 20:22:07.672874   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.672880   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:07.672885   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:07.672933   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:07.696504   39074 cri.go:89] found id: ""
	I1002 20:22:07.696521   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.696530   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:07.696535   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:07.696577   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:07.722324   39074 cri.go:89] found id: ""
	I1002 20:22:07.722340   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.722346   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:07.722351   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:07.722391   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:07.748388   39074 cri.go:89] found id: ""
	I1002 20:22:07.748402   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.748408   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:07.748412   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:07.748449   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:07.773539   39074 cri.go:89] found id: ""
	I1002 20:22:07.773557   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.773564   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:07.773570   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:07.773579   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:07.843853   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:07.843875   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:07.855493   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:07.855511   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:07.909935   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:07.903152   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.903746   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.905310   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.905743   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.907276   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:07.903152   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.903746   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.905310   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.905743   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.907276   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:07.909945   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:07.909955   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:07.971055   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:07.971072   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:10.498842   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:10.509052   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:10.509100   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:10.532641   39074 cri.go:89] found id: ""
	I1002 20:22:10.532673   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.532683   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:10.532689   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:10.532737   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:10.555850   39074 cri.go:89] found id: ""
	I1002 20:22:10.555865   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.555872   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:10.555877   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:10.555943   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:10.579608   39074 cri.go:89] found id: ""
	I1002 20:22:10.579623   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.579631   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:10.579636   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:10.579701   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:10.603930   39074 cri.go:89] found id: ""
	I1002 20:22:10.603945   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.603954   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:10.603960   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:10.604006   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:10.627050   39074 cri.go:89] found id: ""
	I1002 20:22:10.627063   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.627070   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:10.627074   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:10.627115   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:10.650231   39074 cri.go:89] found id: ""
	I1002 20:22:10.650246   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.650254   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:10.650261   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:10.650309   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:10.674381   39074 cri.go:89] found id: ""
	I1002 20:22:10.674396   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.674404   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:10.674413   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:10.674422   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:10.743365   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:10.743388   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:10.754432   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:10.754446   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:10.809037   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:10.802468   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.802995   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.804524   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.804992   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.806544   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:10.802468   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.802995   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.804524   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.804992   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.806544   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:10.809051   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:10.809061   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:10.866627   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:10.866642   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:13.395270   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:13.405561   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:13.405603   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:13.429063   39074 cri.go:89] found id: ""
	I1002 20:22:13.429076   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.429081   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:13.429086   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:13.429125   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:13.452589   39074 cri.go:89] found id: ""
	I1002 20:22:13.452604   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.452609   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:13.452613   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:13.452669   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:13.476844   39074 cri.go:89] found id: ""
	I1002 20:22:13.476856   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.476862   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:13.476866   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:13.476905   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:13.501936   39074 cri.go:89] found id: ""
	I1002 20:22:13.501948   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.501955   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:13.501960   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:13.502000   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:13.526895   39074 cri.go:89] found id: ""
	I1002 20:22:13.526907   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.526913   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:13.526917   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:13.526968   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:13.550888   39074 cri.go:89] found id: ""
	I1002 20:22:13.550902   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.550910   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:13.550914   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:13.550960   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:13.573769   39074 cri.go:89] found id: ""
	I1002 20:22:13.573784   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.573790   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:13.573796   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:13.573807   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:13.626468   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:13.620002   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.620523   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.622171   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.622562   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.623979   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:13.620002   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.620523   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.622171   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.622562   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.623979   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:13.626477   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:13.626485   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:13.685732   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:13.685747   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:13.713954   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:13.713970   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:13.785525   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:13.785541   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
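Each poll cycle in this stretch repeats the same probe: one crictl query per control-plane component, then a diagnostics sweep when every query comes back empty. A condensed sketch of one cycle, using the exact commands from the log (run on the minikube node; crictl and journalctl assumed present in the image):

    # Probe for control-plane containers; print the same warning the
    # logs.go:284 lines show when a component has no container at all.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done
    # Diagnostics gathered once nothing is found (order varies per cycle):
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400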
	I1002 20:22:16.298756   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:16.309103   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:16.309143   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:16.335506   39074 cri.go:89] found id: ""
	I1002 20:22:16.335521   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.335529   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:16.335535   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:16.335586   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:16.359417   39074 cri.go:89] found id: ""
	I1002 20:22:16.359431   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.359437   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:16.359442   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:16.359482   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:16.383496   39074 cri.go:89] found id: ""
	I1002 20:22:16.383509   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.383517   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:16.383523   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:16.383578   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:16.409227   39074 cri.go:89] found id: ""
	I1002 20:22:16.409243   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.409250   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:16.409254   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:16.409294   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:16.433847   39074 cri.go:89] found id: ""
	I1002 20:22:16.433861   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.433870   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:16.433876   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:16.433933   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:16.457278   39074 cri.go:89] found id: ""
	I1002 20:22:16.457293   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.457299   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:16.457306   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:16.457345   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:16.482697   39074 cri.go:89] found id: ""
	I1002 20:22:16.482709   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.482715   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:16.482721   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:16.482730   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:16.548732   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:16.548752   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:16.559732   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:16.559747   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:16.612487   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:16.606170   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.606702   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.608183   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.608579   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.610023   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:16.606170   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.606702   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.608183   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.608579   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.610023   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:16.612499   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:16.612511   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:16.671684   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:16.671702   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
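The describe-nodes failures above all reduce to one symptom: nothing is listening on the apiserver port, so each of kubectl's five discovery retries is refused at the TCP level. A quick manual check of that symptom (a sketch; assumes curl and ss exist in the node image):

    # Confirm the apiserver endpoint is dead, matching the
    # "dial tcp [::1]:8441: connect: connection refused" errors.
    curl -sk "https://localhost:8441/api?timeout=32s" || echo "connection refused"
    sudo ss -ltnp | grep 8441 || echo "nothing listening on port 8441"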
	I1002 20:22:19.200094   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:19.210479   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:19.210527   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:19.235486   39074 cri.go:89] found id: ""
	I1002 20:22:19.235501   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.235510   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:19.235515   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:19.235560   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:19.259294   39074 cri.go:89] found id: ""
	I1002 20:22:19.259305   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.259312   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:19.259316   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:19.259353   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:19.283859   39074 cri.go:89] found id: ""
	I1002 20:22:19.283875   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.283884   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:19.283889   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:19.283941   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:19.307454   39074 cri.go:89] found id: ""
	I1002 20:22:19.307468   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.307473   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:19.307477   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:19.307519   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:19.332321   39074 cri.go:89] found id: ""
	I1002 20:22:19.332334   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.332340   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:19.332345   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:19.332384   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:19.356798   39074 cri.go:89] found id: ""
	I1002 20:22:19.356818   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.356826   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:19.356832   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:19.356886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:19.382609   39074 cri.go:89] found id: ""
	I1002 20:22:19.382624   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.382632   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:19.382641   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:19.382662   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:19.409876   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:19.409890   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:19.476525   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:19.476540   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:19.487600   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:19.487616   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:19.540532   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:19.533762   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.534339   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.535957   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.536415   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.537924   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:19.533762   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.534339   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.535957   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.536415   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.537924   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:19.540541   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:19.540552   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:22.106355   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:22.116499   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:22.116552   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:22.142485   39074 cri.go:89] found id: ""
	I1002 20:22:22.142499   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.142507   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:22.142514   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:22.142561   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:22.168287   39074 cri.go:89] found id: ""
	I1002 20:22:22.168301   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.168308   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:22.168312   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:22.168352   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:22.192639   39074 cri.go:89] found id: ""
	I1002 20:22:22.192666   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.192674   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:22.192680   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:22.192726   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:22.217360   39074 cri.go:89] found id: ""
	I1002 20:22:22.217375   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.217383   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:22.217390   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:22.217436   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:22.241729   39074 cri.go:89] found id: ""
	I1002 20:22:22.241744   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.241753   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:22.241759   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:22.241809   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:22.266793   39074 cri.go:89] found id: ""
	I1002 20:22:22.266810   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.266817   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:22.266822   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:22.266866   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:22.289775   39074 cri.go:89] found id: ""
	I1002 20:22:22.289789   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.289794   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:22.289801   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:22.289809   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:22.344340   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:22.337274   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.337797   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.339350   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.339784   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.341397   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:22.337274   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.337797   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.339350   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.339784   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.341397   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:22.344350   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:22.344362   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:22.404393   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:22.404410   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:22.432171   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:22.432186   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:22.498216   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:22.498233   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
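The "container status" gather a few lines up uses a small shell fallback worth spelling out: the backticks substitute crictl's full path when it is installed (or the bare word crictl when it is not, which then fails), and the trailing || falls through to docker ps:

    # Prefer crictl for container status; fall back to docker if it is missing.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a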
	I1002 20:22:25.010156   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:25.020516   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:25.020560   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:25.045455   39074 cri.go:89] found id: ""
	I1002 20:22:25.045470   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.045480   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:25.045486   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:25.045529   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:25.070018   39074 cri.go:89] found id: ""
	I1002 20:22:25.070031   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.070037   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:25.070041   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:25.070080   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:25.093191   39074 cri.go:89] found id: ""
	I1002 20:22:25.093204   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.093210   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:25.093214   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:25.093257   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:25.117770   39074 cri.go:89] found id: ""
	I1002 20:22:25.117782   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.117788   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:25.117793   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:25.117834   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:25.141300   39074 cri.go:89] found id: ""
	I1002 20:22:25.141315   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.141325   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:25.141331   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:25.141383   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:25.165980   39074 cri.go:89] found id: ""
	I1002 20:22:25.165993   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.165999   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:25.166003   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:25.166041   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:25.191730   39074 cri.go:89] found id: ""
	I1002 20:22:25.191742   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.191749   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:25.191757   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:25.191766   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:25.259005   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:25.259025   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:25.270639   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:25.270673   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:25.324592   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:25.317379   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.317967   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.319572   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.320064   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.321617   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:25.317379   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.317967   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.319572   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.320064   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.321617   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:25.324602   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:25.324614   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:25.385501   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:25.385519   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:27.914463   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:27.925227   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:27.925271   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:27.948666   39074 cri.go:89] found id: ""
	I1002 20:22:27.948681   39074 logs.go:282] 0 containers: []
	W1002 20:22:27.948690   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:27.948695   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:27.948735   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:27.972698   39074 cri.go:89] found id: ""
	I1002 20:22:27.972711   39074 logs.go:282] 0 containers: []
	W1002 20:22:27.972716   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:27.972720   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:27.972765   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:27.996954   39074 cri.go:89] found id: ""
	I1002 20:22:27.996970   39074 logs.go:282] 0 containers: []
	W1002 20:22:27.996979   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:27.996984   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:27.997029   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:28.022092   39074 cri.go:89] found id: ""
	I1002 20:22:28.022109   39074 logs.go:282] 0 containers: []
	W1002 20:22:28.022117   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:28.022123   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:28.022164   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:28.047808   39074 cri.go:89] found id: ""
	I1002 20:22:28.047824   39074 logs.go:282] 0 containers: []
	W1002 20:22:28.047831   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:28.047836   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:28.047876   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:28.071793   39074 cri.go:89] found id: ""
	I1002 20:22:28.071807   39074 logs.go:282] 0 containers: []
	W1002 20:22:28.071816   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:28.071822   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:28.071868   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:28.096447   39074 cri.go:89] found id: ""
	I1002 20:22:28.096462   39074 logs.go:282] 0 containers: []
	W1002 20:22:28.096471   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:28.096479   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:28.096489   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:28.107018   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:28.107032   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:28.159925   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:28.153221   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.153766   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.155300   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.155764   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.157273   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:28.153221   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.153766   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.155300   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.155764   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.157273   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:28.159935   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:28.159945   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:28.219759   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:28.219776   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:28.247325   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:28.247345   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:30.813772   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:30.824079   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:30.824122   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:30.847714   39074 cri.go:89] found id: ""
	I1002 20:22:30.847727   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.847734   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:30.847739   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:30.847783   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:30.870579   39074 cri.go:89] found id: ""
	I1002 20:22:30.870612   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.870619   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:30.870623   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:30.870686   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:30.894513   39074 cri.go:89] found id: ""
	I1002 20:22:30.894528   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.894537   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:30.894542   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:30.894591   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:30.919171   39074 cri.go:89] found id: ""
	I1002 20:22:30.919186   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.919191   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:30.919196   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:30.919236   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:30.943990   39074 cri.go:89] found id: ""
	I1002 20:22:30.944003   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.944009   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:30.944013   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:30.944054   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:30.968147   39074 cri.go:89] found id: ""
	I1002 20:22:30.968162   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.968170   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:30.968178   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:30.968227   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:30.991705   39074 cri.go:89] found id: ""
	I1002 20:22:30.991717   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.991722   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:30.991729   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:30.991740   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:31.046303   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:31.038433   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.039034   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.040929   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.041723   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.043230   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:31.038433   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.039034   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.040929   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.041723   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.043230   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:31.046314   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:31.046325   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:31.105380   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:31.105397   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:31.132347   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:31.132363   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:31.202102   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:31.202119   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:33.715172   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:33.725339   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:33.725386   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:33.750520   39074 cri.go:89] found id: ""
	I1002 20:22:33.750534   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.750543   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:33.750549   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:33.750595   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:33.773913   39074 cri.go:89] found id: ""
	I1002 20:22:33.773928   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.773937   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:33.773943   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:33.773991   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:33.797530   39074 cri.go:89] found id: ""
	I1002 20:22:33.797545   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.797554   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:33.797560   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:33.797630   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:33.821852   39074 cri.go:89] found id: ""
	I1002 20:22:33.821871   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.821879   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:33.821885   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:33.821934   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:33.846332   39074 cri.go:89] found id: ""
	I1002 20:22:33.846348   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.846356   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:33.846362   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:33.846400   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:33.870615   39074 cri.go:89] found id: ""
	I1002 20:22:33.870629   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.870639   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:33.870657   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:33.870706   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:33.895226   39074 cri.go:89] found id: ""
	I1002 20:22:33.895241   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.895250   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:33.895266   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:33.895276   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:33.955530   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:33.955547   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:33.983183   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:33.983198   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:34.049224   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:34.049251   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:34.060667   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:34.060686   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:34.114666   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:34.107838   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.108343   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.109897   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.110325   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.111840   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:34.107838   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.108343   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.109897   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.110325   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.111840   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:36.616388   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:36.626616   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:36.626688   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:36.652926   39074 cri.go:89] found id: ""
	I1002 20:22:36.652947   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.652957   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:36.652965   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:36.653011   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:36.676048   39074 cri.go:89] found id: ""
	I1002 20:22:36.676060   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.676066   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:36.676071   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:36.676115   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:36.700475   39074 cri.go:89] found id: ""
	I1002 20:22:36.700489   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.700499   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:36.700505   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:36.700546   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:36.724541   39074 cri.go:89] found id: ""
	I1002 20:22:36.724559   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.724567   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:36.724576   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:36.724623   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:36.748967   39074 cri.go:89] found id: ""
	I1002 20:22:36.748982   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.748991   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:36.748997   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:36.749043   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:36.773168   39074 cri.go:89] found id: ""
	I1002 20:22:36.773183   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.773191   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:36.773197   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:36.773249   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:36.796981   39074 cri.go:89] found id: ""
	I1002 20:22:36.796997   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.797003   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:36.797011   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:36.797023   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:36.867000   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:36.867018   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:36.878017   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:36.878031   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:36.931114   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:36.924389   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.924871   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.926348   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.926766   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.928319   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:36.924389   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.924871   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.926348   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.926766   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.928319   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
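The "connection refused" dump above means nothing was listening on the apiserver port at all. A minimal manual probe, assuming shell access to the node and reusing the address and port shown in this log, would be:

    # anything listening on the apiserver port inside the node?
    minikube ssh -p functional-753218 -- "sudo ss -ltnp | grep 8441"

    # hit the liveness endpoint directly (self-signed certs, hence -k)
    curl -k https://192.168.49.2:8441/livez

Both failing, as they would here, points at the kube-apiserver never coming up rather than at kubectl being misconfigured.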
	I1002 20:22:36.931129   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:36.931137   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:36.993849   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:36.993868   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:39.524626   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:39.535502   39074 kubeadm.go:601] duration metric: took 4m1.714069333s to restartPrimaryControlPlane
	W1002 20:22:39.535572   39074 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1002 20:22:39.535638   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:22:39.981011   39074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:22:39.993244   39074 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:22:40.001158   39074 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:22:40.001211   39074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:22:40.008736   39074 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:22:40.008749   39074 kubeadm.go:157] found existing configuration files:
	
	I1002 20:22:40.008782   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:22:40.015964   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:22:40.016000   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:22:40.022839   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:22:40.030026   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:22:40.030064   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:22:40.036752   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:22:40.043720   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:22:40.043755   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:22:40.050532   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:22:40.057416   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:22:40.057453   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
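The four grep-and-remove pairs above are minikube's stale kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and removed otherwise so kubeadm can regenerate it. Collapsed into a loop, the logged sequence amounts to (a sketch of the logged commands, not minikube's source):

    endpoint="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # keep the file only if it already targets the expected endpoint
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done

Here every grep exits with status 2 because the files no longer exist after the reset, so the rm calls are no-ops.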
	I1002 20:22:40.063936   39074 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:22:40.116427   39074 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:22:40.171173   39074 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:26:42.624936   39074 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:26:42.625021   39074 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:26:42.627908   39074 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:26:42.627954   39074 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:26:42.628043   39074 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:26:42.628106   39074 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:26:42.628137   39074 kubeadm.go:318] OS: Linux
	I1002 20:26:42.628173   39074 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:26:42.628211   39074 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:26:42.628278   39074 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:26:42.628331   39074 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:26:42.628370   39074 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:26:42.628412   39074 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:26:42.628451   39074 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:26:42.628487   39074 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:26:42.628556   39074 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:26:42.628674   39074 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:26:42.628787   39074 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:26:42.628860   39074 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:26:42.630666   39074 out.go:252]   - Generating certificates and keys ...
	I1002 20:26:42.630736   39074 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:26:42.630813   39074 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:26:42.630900   39074 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:26:42.630973   39074 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:26:42.631035   39074 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:26:42.631078   39074 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:26:42.631142   39074 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:26:42.631194   39074 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:26:42.631256   39074 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:26:42.631324   39074 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:26:42.631354   39074 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:26:42.631399   39074 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:26:42.631441   39074 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:26:42.631487   39074 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:26:42.631529   39074 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:26:42.631595   39074 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:26:42.631671   39074 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:26:42.631741   39074 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:26:42.631812   39074 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:26:42.633616   39074 out.go:252]   - Booting up control plane ...
	I1002 20:26:42.633716   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:26:42.633796   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:26:42.633850   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:26:42.633948   39074 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:26:42.634026   39074 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:26:42.634114   39074 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:26:42.634190   39074 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:26:42.634222   39074 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:26:42.634348   39074 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:26:42.634448   39074 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:26:42.634515   39074 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000852315s
	I1002 20:26:42.634627   39074 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:26:42.634725   39074 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 20:26:42.634809   39074 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:26:42.634907   39074 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:26:42.635026   39074 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000487211s
	I1002 20:26:42.635115   39074 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000480662s
	I1002 20:26:42.635180   39074 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000745923s
	I1002 20:26:42.635185   39074 kubeadm.go:318] 
	I1002 20:26:42.635259   39074 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:26:42.635324   39074 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:26:42.635395   39074 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:26:42.635478   39074 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:26:42.635541   39074 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:26:42.635608   39074 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:26:42.635644   39074 kubeadm.go:318] 
	W1002 20:26:42.635735   39074 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000852315s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000487211s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000480662s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000745923s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
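kubeadm's hint above is the standard way to find a crashed control-plane container under CRI-O. Chained together against the socket used throughout this run, an inspection pass looks like:

    sock=unix:///var/run/crio/crio.sock
    # list kube-* containers in every state, excluding pause sandboxes
    sudo crictl --runtime-endpoint "$sock" ps -a | grep kube | grep -v pause
    # then dump the logs of a suspect container (CONTAINERID as printed by ps)
    sudo crictl --runtime-endpoint "$sock" logs CONTAINERID

In this run the later per-component crictl queries all return empty, so there is no failed container to inspect; the components were never created at all.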
	
	I1002 20:26:42.635812   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:26:43.072992   39074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:26:43.084946   39074 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:26:43.084987   39074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:26:43.092545   39074 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:26:43.092552   39074 kubeadm.go:157] found existing configuration files:
	
	I1002 20:26:43.092583   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:26:43.099679   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:26:43.099725   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:26:43.106411   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:26:43.113271   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:26:43.113302   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:26:43.120089   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:26:43.126923   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:26:43.126953   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:26:43.133686   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:26:43.140427   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:26:43.140454   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:26:43.147131   39074 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:26:43.180956   39074 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:26:43.181017   39074 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:26:43.199951   39074 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:26:43.200009   39074 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:26:43.200037   39074 kubeadm.go:318] OS: Linux
	I1002 20:26:43.200076   39074 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:26:43.200114   39074 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:26:43.200153   39074 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:26:43.200196   39074 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:26:43.200234   39074 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:26:43.200272   39074 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:26:43.200315   39074 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:26:43.200350   39074 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:26:43.254197   39074 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:26:43.254330   39074 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:26:43.254435   39074 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:26:43.260331   39074 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:26:43.264543   39074 out.go:252]   - Generating certificates and keys ...
	I1002 20:26:43.264610   39074 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:26:43.264706   39074 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:26:43.264789   39074 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:26:43.264843   39074 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:26:43.264905   39074 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:26:43.264949   39074 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:26:43.265012   39074 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:26:43.265062   39074 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:26:43.265129   39074 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:26:43.265188   39074 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:26:43.265219   39074 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:26:43.265265   39074 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:26:43.505091   39074 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:26:43.932140   39074 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:26:44.064643   39074 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:26:44.173218   39074 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:26:44.534380   39074 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:26:44.534804   39074 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:26:44.538135   39074 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:26:44.539757   39074 out.go:252]   - Booting up control plane ...
	I1002 20:26:44.539881   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:26:44.539950   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:26:44.540002   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:26:44.553179   39074 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:26:44.553329   39074 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:26:44.559491   39074 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:26:44.559770   39074 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:26:44.559808   39074 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:26:44.659881   39074 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:26:44.660026   39074 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:26:45.660495   39074 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000782032s
	I1002 20:26:45.664397   39074 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:26:45.664522   39074 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 20:26:45.664595   39074 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:26:45.664676   39074 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:30:45.665391   39074 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000736768s
	I1002 20:30:45.665506   39074 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000901114s
	I1002 20:30:45.665618   39074 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001071178s
	I1002 20:30:45.665634   39074 kubeadm.go:318] 
	I1002 20:30:45.665788   39074 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:30:45.665904   39074 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:30:45.665995   39074 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:30:45.666081   39074 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:30:45.666142   39074 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:30:45.666213   39074 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:30:45.666216   39074 kubeadm.go:318] 
	I1002 20:30:45.669103   39074 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:30:45.669219   39074 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:30:45.669740   39074 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:30:45.669792   39074 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
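The three health checks that time out above are plain HTTPS GETs and can be reproduced by hand from inside the node; the URLs and ports are taken directly from the kubeadm output:

    curl -k https://192.168.49.2:8441/livez      # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez        # kube-scheduler

"connection refused" on all three, as reported here, means nothing is listening on those ports, i.e. the static pods are not running at all, as opposed to running but failing their checks.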
	I1002 20:30:45.669843   39074 kubeadm.go:402] duration metric: took 12m7.882478982s to StartCluster
	I1002 20:30:45.669874   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:30:45.669917   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:30:45.695577   39074 cri.go:89] found id: ""
	I1002 20:30:45.695596   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.695603   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:30:45.695610   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:30:45.695674   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:30:45.719440   39074 cri.go:89] found id: ""
	I1002 20:30:45.719456   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.719464   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:30:45.719469   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:30:45.719511   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:30:45.743166   39074 cri.go:89] found id: ""
	I1002 20:30:45.743181   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.743190   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:30:45.743195   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:30:45.743238   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:30:45.767934   39074 cri.go:89] found id: ""
	I1002 20:30:45.767959   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.767967   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:30:45.767974   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:30:45.768019   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:30:45.792091   39074 cri.go:89] found id: ""
	I1002 20:30:45.792102   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.792108   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:30:45.792112   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:30:45.792150   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:30:45.815448   39074 cri.go:89] found id: ""
	I1002 20:30:45.815463   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.815469   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:30:45.815475   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:30:45.815518   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:30:45.840287   39074 cri.go:89] found id: ""
	I1002 20:30:45.840299   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.840305   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:30:45.840312   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:30:45.840321   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:30:45.868158   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:30:45.868172   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:30:45.936734   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:30:45.936752   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:30:45.948158   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:30:45.948175   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:30:46.002360   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:30:45.995517   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.996138   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.997668   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.998069   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.999573   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:30:45.995517   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.996138   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.997668   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.998069   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.999573   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:30:46.002381   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:30:46.002392   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
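The per-component queries above all follow one pattern: ask the runtime for containers whose name matches a control-plane component, in any state, and fall back to the journals when nothing is found. As a loop (mirroring the logged commands):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        # empty output: the runtime never created a container for this component
        echo "$name: ${ids:-<none>}"
    done

Every query in this run comes back empty, which is why the log gathering above leans on the kubelet and CRI-O journalctl units instead.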
	W1002 20:30:46.065214   39074 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000782032s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000736768s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000901114s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001071178s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:30:46.065257   39074 out.go:285] * 
	W1002 20:30:46.065383   39074 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output reproduced above]
	
	W1002 20:30:46.065406   39074 out.go:285] * 
	W1002 20:30:46.067075   39074 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:30:46.070473   39074 out.go:203] 
	W1002 20:30:46.071639   39074 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output reproduced above]
	
	W1002 20:30:46.071666   39074 out.go:285] * 
	I1002 20:30:46.072909   39074 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.697507819Z" level=info msg="Checking image status: kicbase/echo-server:functional-753218" id=fa393231-a5fd-49e9-8950-3e6bf6e4053d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.720007372Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-753218" id=7436e45b-1a60-4fc4-a22f-69e3e32eab53 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.720140274Z" level=info msg="Image docker.io/kicbase/echo-server:functional-753218 not found" id=7436e45b-1a60-4fc4-a22f-69e3e32eab53 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.720190361Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-753218 found" id=7436e45b-1a60-4fc4-a22f-69e3e32eab53 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.742733677Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-753218" id=be98d603-767a-459b-b0a0-9fdab10c2304 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.742868717Z" level=info msg="Image localhost/kicbase/echo-server:functional-753218 not found" id=be98d603-767a-459b-b0a0-9fdab10c2304 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.742909978Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-753218 found" id=be98d603-767a-459b-b0a0-9fdab10c2304 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.459772794Z" level=info msg="Checking image status: kicbase/echo-server:functional-753218" id=c8f7a097-87b5-4be9-96a8-83c5b0aea5dd name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.483212464Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-753218" id=344a7416-e7c8-49d6-a170-a397aaa725f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.483336385Z" level=info msg="Image docker.io/kicbase/echo-server:functional-753218 not found" id=344a7416-e7c8-49d6-a170-a397aaa725f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.483365009Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-753218 found" id=344a7416-e7c8-49d6-a170-a397aaa725f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.508218789Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-753218" id=bf0e1532-86b0-435c-95d0-10b8695ecf60 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.508368222Z" level=info msg="Image localhost/kicbase/echo-server:functional-753218 not found" id=bf0e1532-86b0-435c-95d0-10b8695ecf60 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.508409995Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-753218 found" id=bf0e1532-86b0-435c-95d0-10b8695ecf60 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.546136327Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b91303cc-8916-495e-ab50-b39ca6a3e470 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.547120349Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=f14b81fb-d2e6-4ab2-80c7-0d6ecf807ca9 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.548289765Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-753218/kube-apiserver" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.548564978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.553541497Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.554186326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.568588089Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.570341207Z" level=info msg="createCtr: deleting container ID 19fc64788ec4955e7094db6406e309c1f152c13fb679a12bafd3867e5a5ba39c from idIndex" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.570379579Z" level=info msg="createCtr: removing container 19fc64788ec4955e7094db6406e309c1f152c13fb679a12bafd3867e5a5ba39c" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.570421105Z" level=info msg="createCtr: deleting container 19fc64788ec4955e7094db6406e309c1f152c13fb679a12bafd3867e5a5ba39c from storage" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.573125941Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-753218_kube-system_802f0aebed1bb3dd62306b1d2076fd94_0" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
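	The repeated "cannot open sd-bus: No such file or directory" creation error above is the common symptom of an OCI runtime configured for the systemd cgroup manager on a node where systemd's bus socket is unreachable. A sketch of how one might confirm this, assuming stock CRI-O config paths (the minikube image may lay these out differently):
		- 'grep -r cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null'
		- 'ls -l /run/systemd/private /run/dbus/system_bus_socket'
	If cgroup_manager is "systemd" while neither socket exists, setting cgroup_manager = "cgroupfs" (together with conmon_cgroup = "pod") in crio.conf is the usual workaround.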
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:30:57.530559   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:57.531204   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:57.532168   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:57.534889   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:57.535439   17119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
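	Two quick probes, assuming standard node tooling, to separate "apiserver never started" from "wrong port": /livez is the same endpoint kubeadm polled above, and ss shows whether anything is listening on 8441:
		- 'curl -k https://192.168.49.2:8441/livez'
		- 'ss -ltnp | grep 8441'
	Given the CreateContainerError above, both would be expected to fail here: the kube-apiserver container never starts, so nothing ever binds 8441.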
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:30:57 up  1:13,  0 user,  load average: 0.40, 0.13, 0.09
	Linux functional-753218 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:30:53 functional-753218 kubelet[14925]: E1002 20:30:53.583292   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:30:53 functional-753218 kubelet[14925]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-753218_kube-system(b932b0024653c86a7ea85a2a83a943a4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:53 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:30:53 functional-753218 kubelet[14925]: E1002 20:30:53.583334   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-753218" podUID="b932b0024653c86a7ea85a2a83a943a4"
	Oct 02 20:30:54 functional-753218 kubelet[14925]: E1002 20:30:54.545043   14925 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:30:54 functional-753218 kubelet[14925]: E1002 20:30:54.566502   14925 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:30:54 functional-753218 kubelet[14925]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:54 functional-753218 kubelet[14925]:  > podSandboxID="6ae6de7d398fa442f7f140a6767c4de14fdad57319542a7b5e3df53c8ac49d18"
	Oct 02 20:30:54 functional-753218 kubelet[14925]: E1002 20:30:54.566605   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:30:54 functional-753218 kubelet[14925]:         container kube-scheduler start failed in pod kube-scheduler-functional-753218_kube-system(b25a71e49a335bbe853872de1b1e3093): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:54 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:30:54 functional-753218 kubelet[14925]: E1002 20:30:54.566641   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753218" podUID="b25a71e49a335bbe853872de1b1e3093"
	Oct 02 20:30:55 functional-753218 kubelet[14925]: E1002 20:30:55.545737   14925 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:30:55 functional-753218 kubelet[14925]: E1002 20:30:55.564007   14925 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-753218\" not found"
	Oct 02 20:30:55 functional-753218 kubelet[14925]: E1002 20:30:55.573357   14925 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:30:55 functional-753218 kubelet[14925]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:55 functional-753218 kubelet[14925]:  > podSandboxID="7a2fde0baea214f3eb0043d508edd186efa5f3f087d902573e164eb4765f9b5b"
	Oct 02 20:30:55 functional-753218 kubelet[14925]: E1002 20:30:55.573464   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:30:55 functional-753218 kubelet[14925]:         container kube-apiserver start failed in pod kube-apiserver-functional-753218_kube-system(802f0aebed1bb3dd62306b1d2076fd94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:55 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:30:55 functional-753218 kubelet[14925]: E1002 20:30:55.573515   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753218" podUID="802f0aebed1bb3dd62306b1d2076fd94"
	Oct 02 20:30:56 functional-753218 kubelet[14925]: E1002 20:30:56.170861   14925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753218?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:30:56 functional-753218 kubelet[14925]: E1002 20:30:56.170842   14925 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753218.186ac673e6f5d5d2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753218,UID:functional-753218,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753218 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753218,},FirstTimestamp:2025-10-02 20:26:45.540009426 +0000 UTC m=+0.879461117,LastTimestamp:2025-10-02 20:26:45.540009426 +0000 UTC m=+0.879461117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753218,}"
	Oct 02 20:30:56 functional-753218 kubelet[14925]: I1002 20:30:56.325790   14925 kubelet_node_status.go:75] "Attempting to register node" node="functional-753218"
	Oct 02 20:30:56 functional-753218 kubelet[14925]: E1002 20:30:56.326143   14925 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753218"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218: exit status 2 (308.213626ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753218" apiserver is not running, skipping kubectl commands (state="Stopped")
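A note on the status check above: --format takes a Go template over minikube's status fields, so related fields can be read in one call (field names as documented by minikube status; shown as a sketch):
	out/minikube-linux-amd64 status -p functional-753218 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'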
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (1.99s)

x
+
TestFunctional/parallel/PersistentVolumeClaim (241.45s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
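The poll below is equivalent to this label-selector query (a sketch; the namespace and selector are the ones encoded in the request URLs that follow):
	kubectl get pods -n kube-system -l integration-test=storage-provisioner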
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[warning above repeated 12 more times; identical lines omitted]
I1002 20:31:06.658745   12851 retry.go:31] will retry after 11.263269484s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[warning above repeated 10 more times; identical lines omitted]
I1002 20:31:17.922431   12851 retry.go:31] will retry after 8.78567419s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[warning above repeated 8 more times; identical lines omitted]
I1002 20:31:26.708768   12851 retry.go:31] will retry after 14.347250611s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[warning above repeated 13 more times; identical lines omitted]
I1002 20:31:41.056862   12851 retry.go:31] will retry after 32.937341483s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[warning above repeated 32 more times; identical lines omitted]
I1002 20:32:13.994540   12851 retry.go:31] will retry after 31.271590905s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218: exit status 2 (281.087362ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-753218" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
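For context, the warnings above come from a label-selector poll against the apiserver that keeps retrying on "connection refused" until its 4m0s budget expires, at which point the client-side rate limiter surfaces "context deadline exceeded". A minimal sketch of that kind of wait loop, assuming client-go and apimachinery (illustrative only, not the harness's actual code):

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig that minikube writes for the profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 2s for up to 4m, mirroring the 4m0s budget in the log above.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
					LabelSelector: "integration-test=storage-provisioner",
				})
				if err != nil {
					// "connection refused" lands here while the apiserver is down;
					// returning (false, nil) keeps the poll going until the deadline.
					fmt.Println("WARNING: pod list returned:", err)
					return false, nil
				}
				for _, p := range pods.Items {
					if p.Status.Phase == "Running" {
						return true, nil
					}
				}
				return false, nil
			})
		if err != nil {
			// After 4m of refusals this is "context deadline exceeded".
			fmt.Println("pod failed to start:", err)
		}
	}

The equivalent one-off manual check would be "kubectl get pods -n kube-system -l integration-test=storage-provisioner", which fails the same way for as long as 192.168.49.2:8441 refuses connections.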
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753218
helpers_test.go:243: (dbg) docker inspect functional-753218:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	        "Created": "2025-10-02T20:04:01.746609873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27425,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:04:01.779241985Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hosts",
	        "LogPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2-json.log",
	        "Name": "/functional-753218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	                "LowerDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753218",
	                "Source": "/var/lib/docker/volumes/functional-753218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753218",
	                "name.minikube.sigs.k8s.io": "functional-753218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "004081b2f3377fd186346a9b01eb1a4bc9a3def8255492ba6d60a3d97d3ae02f",
	            "SandboxKey": "/var/run/docker/netns/004081b2f337",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:da:57:41:87:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5a9808f557430f8bb8f634f8b99193d9e4883c7bab84d388ec04ce3ac0291ef6",
	                    "EndpointID": "2b07b33f476d99b70eed072dbaf735ba0ad3a85f48f17fd63921a2bfbfd2db45",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753218",
	                        "13014a14c3fc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
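The Ports map in the inspect output above shows the apiserver's 8441/tcp published on 127.0.0.1:32781. As a convenience when debugging this kind of failure (a hypothetical helper, not part of the harness), that mapping can be pulled straight out of docker inspect with a Go template:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-753218
	# prints: 32781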
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218: exit status 2 (283.222946ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
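Note that the two status probes disagree: the container host reports Running while the apiserver inside it reports Stopped. A single status call with a multi-field template (the same --format mechanism the harness uses above) makes the split state visible at a glance:

	out/minikube-linux-amd64 status -p functional-753218 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'
	# from the probes above: host=Running ... apiserver=Stopped (kubelet state not captured in this log)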
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 logs -n 25
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-753218 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000626359/001:/mount1 --alsologtostderr -v=1 │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ mount          │ -p functional-753218 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000626359/001:/mount3 --alsologtostderr -v=1 │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh            │ functional-753218 ssh findmnt -T /mount1                                                                           │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service        │ functional-753218 service list -o json                                                                             │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service        │ functional-753218 service --namespace=default --https --url hello-node                                             │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service        │ functional-753218 service hello-node --url --format={{.IP}}                                                        │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh            │ functional-753218 ssh findmnt -T /mount1                                                                           │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ service        │ functional-753218 service hello-node --url                                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh            │ functional-753218 ssh findmnt -T /mount2                                                                           │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh            │ functional-753218 ssh findmnt -T /mount3                                                                           │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ mount          │ -p functional-753218 --kill=true                                                                                   │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ start          │ -p functional-753218 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ start          │ -p functional-753218 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ start          │ -p functional-753218 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-753218 --alsologtostderr -v=1                                                     │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ update-context │ functional-753218 update-context --alsologtostderr -v=2                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ update-context │ functional-753218 update-context --alsologtostderr -v=2                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ update-context │ functional-753218 update-context --alsologtostderr -v=2                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image          │ functional-753218 image ls --format short --alsologtostderr                                                        │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ ssh            │ functional-753218 ssh pgrep buildkitd                                                                              │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ image          │ functional-753218 image ls --format yaml --alsologtostderr                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image          │ functional-753218 image build -t localhost/my-image:functional-753218 testdata/build --alsologtostderr             │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image          │ functional-753218 image ls --format json --alsologtostderr                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image          │ functional-753218 image ls --format table --alsologtostderr                                                        │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image          │ functional-753218 image ls                                                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:31:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:31:01.900418   60624 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:31:01.900625   60624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:31:01.900633   60624 out.go:374] Setting ErrFile to fd 2...
	I1002 20:31:01.900637   60624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:31:01.900837   60624 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:31:01.901233   60624 out.go:368] Setting JSON to false
	I1002 20:31:01.902055   60624 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4411,"bootTime":1759432651,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:31:01.902136   60624 start.go:140] virtualization: kvm guest
	I1002 20:31:01.904282   60624 out.go:179] * [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:31:01.905775   60624 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:31:01.905831   60624 notify.go:221] Checking for updates...
	I1002 20:31:01.908487   60624 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:31:01.909539   60624 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:31:01.910782   60624 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:31:01.912067   60624 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:31:01.913370   60624 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:31:01.915249   60624 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:31:01.915917   60624 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:31:01.940532   60624 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:31:01.940722   60624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:31:01.999857   60624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:31:01.988739527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:31:01.999965   60624 docker.go:319] overlay module found
	I1002 20:31:02.003791   60624 out.go:179] * Using the docker driver based on existing profile
	I1002 20:31:02.005402   60624 start.go:306] selected driver: docker
	I1002 20:31:02.005424   60624 start.go:936] validating driver "docker" against &{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:31:02.005528   60624 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:31:02.005622   60624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:31:02.065972   60624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:31:02.054061844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:31:02.066877   60624 cni.go:84] Creating CNI manager for ""
	I1002 20:31:02.066944   60624 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:31:02.066994   60624 start.go:350] cluster config:
	{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:31:02.069107   60624 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 02 20:34:50 functional-753218 crio[5814]: time="2025-10-02T20:34:50.570640773Z" level=info msg="createCtr: removing container 0f5fd7140c2f682a1016f28b5625575f6b500d50cbc3401b7c9bffb3b101ee8a" id=44261e2e-c317-4d7c-ba73-432758970ac6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:34:50 functional-753218 crio[5814]: time="2025-10-02T20:34:50.570692283Z" level=info msg="createCtr: deleting container 0f5fd7140c2f682a1016f28b5625575f6b500d50cbc3401b7c9bffb3b101ee8a from storage" id=44261e2e-c317-4d7c-ba73-432758970ac6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:34:50 functional-753218 crio[5814]: time="2025-10-02T20:34:50.572481992Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-753218_kube-system_802f0aebed1bb3dd62306b1d2076fd94_0" id=44261e2e-c317-4d7c-ba73-432758970ac6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:34:51 functional-753218 crio[5814]: time="2025-10-02T20:34:51.545521594Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=dd6e26a3-c06e-4b3b-be73-c7b749a25e73 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:34:51 functional-753218 crio[5814]: time="2025-10-02T20:34:51.5463849Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=e0c25bc4-aa48-4a06-9e3e-e540b4b67959 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:34:51 functional-753218 crio[5814]: time="2025-10-02T20:34:51.547298502Z" level=info msg="Creating container: kube-system/etcd-functional-753218/etcd" id=3ba478e8-4386-458e-9d2d-5e20b6c9597c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:34:51 functional-753218 crio[5814]: time="2025-10-02T20:34:51.547564572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:34:51 functional-753218 crio[5814]: time="2025-10-02T20:34:51.551129524Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:34:51 functional-753218 crio[5814]: time="2025-10-02T20:34:51.551602497Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:34:51 functional-753218 crio[5814]: time="2025-10-02T20:34:51.568474919Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=3ba478e8-4386-458e-9d2d-5e20b6c9597c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:34:51 functional-753218 crio[5814]: time="2025-10-02T20:34:51.569786677Z" level=info msg="createCtr: deleting container ID 29df9748037323bce2dd5fb782e4ef2b121607d9871574269f15c958e4235331 from idIndex" id=3ba478e8-4386-458e-9d2d-5e20b6c9597c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:34:51 functional-753218 crio[5814]: time="2025-10-02T20:34:51.56981527Z" level=info msg="createCtr: removing container 29df9748037323bce2dd5fb782e4ef2b121607d9871574269f15c958e4235331" id=3ba478e8-4386-458e-9d2d-5e20b6c9597c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:34:51 functional-753218 crio[5814]: time="2025-10-02T20:34:51.569841935Z" level=info msg="createCtr: deleting container 29df9748037323bce2dd5fb782e4ef2b121607d9871574269f15c958e4235331 from storage" id=3ba478e8-4386-458e-9d2d-5e20b6c9597c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:34:51 functional-753218 crio[5814]: time="2025-10-02T20:34:51.5717119Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-753218_kube-system_91f2b96cb3e3f380ded17f30e8d873bd_0" id=3ba478e8-4386-458e-9d2d-5e20b6c9597c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:34:54 functional-753218 crio[5814]: time="2025-10-02T20:34:54.54614247Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=2d0db792-a376-4942-986e-04e7bc2ed0c2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:34:54 functional-753218 crio[5814]: time="2025-10-02T20:34:54.548118132Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=d7530945-ef7f-4aaa-97fc-397311acb54b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:34:54 functional-753218 crio[5814]: time="2025-10-02T20:34:54.549012244Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-753218/kube-scheduler" id=fa80ea59-f9c9-49ed-8ecf-685ec1f88c39 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:34:54 functional-753218 crio[5814]: time="2025-10-02T20:34:54.54924919Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:34:54 functional-753218 crio[5814]: time="2025-10-02T20:34:54.55254275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:34:54 functional-753218 crio[5814]: time="2025-10-02T20:34:54.552993254Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:34:54 functional-753218 crio[5814]: time="2025-10-02T20:34:54.570219621Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=fa80ea59-f9c9-49ed-8ecf-685ec1f88c39 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:34:54 functional-753218 crio[5814]: time="2025-10-02T20:34:54.571432944Z" level=info msg="createCtr: deleting container ID 33dec8ab21b30463631dd8c819e0e6576af73cf3e6852d47a8fd6b7c7211fb19 from idIndex" id=fa80ea59-f9c9-49ed-8ecf-685ec1f88c39 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:34:54 functional-753218 crio[5814]: time="2025-10-02T20:34:54.571462839Z" level=info msg="createCtr: removing container 33dec8ab21b30463631dd8c819e0e6576af73cf3e6852d47a8fd6b7c7211fb19" id=fa80ea59-f9c9-49ed-8ecf-685ec1f88c39 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:34:54 functional-753218 crio[5814]: time="2025-10-02T20:34:54.571490977Z" level=info msg="createCtr: deleting container 33dec8ab21b30463631dd8c819e0e6576af73cf3e6852d47a8fd6b7c7211fb19 from storage" id=fa80ea59-f9c9-49ed-8ecf-685ec1f88c39 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:34:54 functional-753218 crio[5814]: time="2025-10-02T20:34:54.573260849Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-753218_kube-system_b25a71e49a335bbe853872de1b1e3093_0" id=fa80ea59-f9c9-49ed-8ecf-685ec1f88c39 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:34:55.242428   19164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:34:55.242961   19164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:34:55.244435   19164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:34:55.244884   19164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:34:55.246349   19164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:34:55 up  1:17,  0 user,  load average: 0.03, 0.11, 0.09
	Linux functional-753218 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:34:50 functional-753218 kubelet[14925]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:34:50 functional-753218 kubelet[14925]:  > podSandboxID="7a2fde0baea214f3eb0043d508edd186efa5f3f087d902573e164eb4765f9b5b"
	Oct 02 20:34:50 functional-753218 kubelet[14925]: E1002 20:34:50.572845   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:34:50 functional-753218 kubelet[14925]:         container kube-apiserver start failed in pod kube-apiserver-functional-753218_kube-system(802f0aebed1bb3dd62306b1d2076fd94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:34:50 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:34:50 functional-753218 kubelet[14925]: E1002 20:34:50.572873   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753218" podUID="802f0aebed1bb3dd62306b1d2076fd94"
	Oct 02 20:34:51 functional-753218 kubelet[14925]: E1002 20:34:51.545128   14925 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:34:51 functional-753218 kubelet[14925]: E1002 20:34:51.571955   14925 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:34:51 functional-753218 kubelet[14925]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:34:51 functional-753218 kubelet[14925]:  > podSandboxID="938004d98ea751eb2eeff411184915e21872d6d9720257a5999ef0864a9cbb1c"
	Oct 02 20:34:51 functional-753218 kubelet[14925]: E1002 20:34:51.572035   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:34:51 functional-753218 kubelet[14925]:         container etcd start failed in pod etcd-functional-753218_kube-system(91f2b96cb3e3f380ded17f30e8d873bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:34:51 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:34:51 functional-753218 kubelet[14925]: E1002 20:34:51.572064   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753218" podUID="91f2b96cb3e3f380ded17f30e8d873bd"
	Oct 02 20:34:54 functional-753218 kubelet[14925]: E1002 20:34:54.204891   14925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753218?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:34:54 functional-753218 kubelet[14925]: I1002 20:34:54.393353   14925 kubelet_node_status.go:75] "Attempting to register node" node="functional-753218"
	Oct 02 20:34:54 functional-753218 kubelet[14925]: E1002 20:34:54.393777   14925 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753218"
	Oct 02 20:34:54 functional-753218 kubelet[14925]: E1002 20:34:54.545732   14925 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:34:54 functional-753218 kubelet[14925]: E1002 20:34:54.573485   14925 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:34:54 functional-753218 kubelet[14925]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:34:54 functional-753218 kubelet[14925]:  > podSandboxID="6ae6de7d398fa442f7f140a6767c4de14fdad57319542a7b5e3df53c8ac49d18"
	Oct 02 20:34:54 functional-753218 kubelet[14925]: E1002 20:34:54.573588   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:34:54 functional-753218 kubelet[14925]:         container kube-scheduler start failed in pod kube-scheduler-functional-753218_kube-system(b25a71e49a335bbe853872de1b1e3093): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:34:54 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:34:54 functional-753218 kubelet[14925]: E1002 20:34:54.573624   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753218" podUID="b25a71e49a335bbe853872de1b1e3093"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218: exit status 2 (278.6785ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753218" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (241.45s)
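Every CreateContainer attempt in the CRI-O and kubelet logs above fails the same way: "container create failed: cannot open sd-bus: No such file or directory". One plausible reading is that the OCI runtime is configured for the systemd cgroup manager but cannot reach a systemd D-Bus socket on the node, so kube-apiserver, etcd, and kube-scheduler never start and the API-dependent checks in the surrounding tests fail as a side effect. A minimal triage sketch, assuming SSH into the node works and the usual default paths apply (none of this is confirmed by the log itself):

    out/minikube-linux-amd64 -p functional-753218 ssh -- sudo grep -R cgroup_manager /etc/crio
    out/minikube-linux-amd64 -p functional-753218 ssh -- ls -l /run/dbus/system_bus_socket /run/systemd/private
    out/minikube-linux-amd64 -p functional-753218 ssh -- sudo journalctl -u crio --no-pager | tail -n 50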

TestFunctional/parallel/MySQL (2.33s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-753218 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) Non-zero exit: kubectl --context functional-753218 replace --force -f testdata/mysql.yaml: exit status 1 (55.094514ms)

** stderr ** 
	E1002 20:30:51.892073   53755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:30:51.892529   53755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1800: failed to kubectl replace mysql: args "kubectl --context functional-753218 replace --force -f testdata/mysql.yaml" failed: exit status 1
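The replace failure here is a downstream symptom rather than anything MySQL-specific: the apiserver at 192.168.49.2:8441 refuses connections because the kube-apiserver container itself never started (the sd-bus CreateContainer errors noted above). A quick reachability check, sketched under the assumption that the port mapping in the docker inspect output below is current:

    docker port functional-753218 8441
    # -> 127.0.0.1:32781 per the NetworkSettings below
    kubectl --context functional-753218 get --raw=/readyz
    # fails with the same "connection refused" until the apiserver container runs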
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753218
helpers_test.go:243: (dbg) docker inspect functional-753218:

-- stdout --
	[
	    {
	        "Id": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	        "Created": "2025-10-02T20:04:01.746609873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27425,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:04:01.779241985Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hosts",
	        "LogPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2-json.log",
	        "Name": "/functional-753218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	                "LowerDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-753218",
	                "Source": "/var/lib/docker/volumes/functional-753218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753218",
	                "name.minikube.sigs.k8s.io": "functional-753218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "004081b2f3377fd186346a9b01eb1a4bc9a3def8255492ba6d60a3d97d3ae02f",
	            "SandboxKey": "/var/run/docker/netns/004081b2f337",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:da:57:41:87:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5a9808f557430f8bb8f634f8b99193d9e4883c7bab84d388ec04ce3ac0291ef6",
	                    "EndpointID": "2b07b33f476d99b70eed072dbaf735ba0ad3a85f48f17fd63921a2bfbfd2db45",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753218",
	                        "13014a14c3fc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
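When only a field or two matters, docker inspect accepts a Go template, which keeps post-mortems like the dump above much shorter. A sketch using fields shown in this report:

    docker inspect -f '{{.State.Status}}' functional-753218
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-753218
    # the first prints "running"; the second prints the host port mapped to the
    # apiserver port (32781 in the output above)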
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218: exit status 2 (336.696807ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-753218 logs -n 25: (1.062240014s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-753218 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ cache   │ functional-753218 cache reload                                                                           │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ ssh     │ functional-753218 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │ 02 Oct 25 20:18 UTC │
	│ kubectl │ functional-753218 kubectl -- --context functional-753218 get pods                                        │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ start   │ -p functional-753218 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:18 UTC │                     │
	│ license │                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ config  │ functional-753218 config unset cpus                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh sudo systemctl is-active docker                                                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ config  │ functional-753218 config get cpus                                                                        │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ config  │ functional-753218 config set cpus 2                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ config  │ functional-753218 config get cpus                                                                        │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ config  │ functional-753218 config unset cpus                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ addons  │ functional-753218 addons list                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ config  │ functional-753218 config get cpus                                                                        │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh     │ functional-753218 ssh sudo systemctl is-active containerd                                                │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ addons  │ functional-753218 addons list -o json                                                                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh sudo cat /etc/ssl/certs/12851.pem                                                  │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh sudo cat /etc/test/nested/copy/12851/hosts                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ image   │ functional-753218 image load --daemon kicbase/echo-server:functional-753218 --alsologtostderr            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh     │ functional-753218 ssh sudo cat /usr/share/ca-certificates/12851.pem                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ cp      │ functional-753218 cp testdata/cp-test.txt /home/docker/cp-test.txt                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh sudo cat /etc/ssl/certs/51391683.0                                                 │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh     │ functional-753218 ssh -n functional-753218 sudo cat /home/docker/cp-test.txt                             │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:18:34
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:18:34.206207   39074 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:18:34.206493   39074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:34.206497   39074 out.go:374] Setting ErrFile to fd 2...
	I1002 20:18:34.206500   39074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:18:34.206690   39074 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:18:34.207119   39074 out.go:368] Setting JSON to false
	I1002 20:18:34.208025   39074 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3663,"bootTime":1759432651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:18:34.208099   39074 start.go:140] virtualization: kvm guest
	I1002 20:18:34.211076   39074 out.go:179] * [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:18:34.212342   39074 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:18:34.212345   39074 notify.go:221] Checking for updates...
	I1002 20:18:34.213685   39074 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:18:34.214912   39074 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:18:34.216075   39074 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:18:34.217175   39074 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:18:34.218365   39074 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:18:34.219862   39074 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:18:34.219970   39074 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:18:34.243293   39074 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:18:34.243370   39074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:34.294846   39074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 20:18:34.285071909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:18:34.294933   39074 docker.go:319] overlay module found
	I1002 20:18:34.296853   39074 out.go:179] * Using the docker driver based on existing profile
	I1002 20:18:34.297994   39074 start.go:306] selected driver: docker
	I1002 20:18:34.298010   39074 start.go:936] validating driver "docker" against &{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:34.298070   39074 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:18:34.298154   39074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:18:34.347576   39074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 20:18:34.338434102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:18:34.348199   39074 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:18:34.348218   39074 cni.go:84] Creating CNI manager for ""
	I1002 20:18:34.348268   39074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:18:34.348308   39074 start.go:350] cluster config:
	{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:34.350240   39074 out.go:179] * Starting "functional-753218" primary control-plane node in "functional-753218" cluster
	I1002 20:18:34.351573   39074 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:18:34.353042   39074 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:18:34.354380   39074 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:18:34.354407   39074 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:18:34.354414   39074 cache.go:59] Caching tarball of preloaded images
	I1002 20:18:34.354480   39074 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:18:34.354514   39074 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:18:34.354521   39074 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:18:34.354600   39074 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/config.json ...
	I1002 20:18:34.373723   39074 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:18:34.373737   39074 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:18:34.373750   39074 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:18:34.373779   39074 start.go:361] acquireMachinesLock for functional-753218: {Name:mk742badf6f1dbafca8397e398143e758831ae3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:18:34.373825   39074 start.go:365] duration metric: took 33.687µs to acquireMachinesLock for "functional-753218"
	I1002 20:18:34.373838   39074 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:18:34.373845   39074 fix.go:55] fixHost starting: 
	I1002 20:18:34.374037   39074 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:18:34.391194   39074 fix.go:113] recreateIfNeeded on functional-753218: state=Running err=<nil>
	W1002 20:18:34.391212   39074 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:18:34.393102   39074 out.go:252] * Updating the running docker "functional-753218" container ...
	I1002 20:18:34.393135   39074 machine.go:93] provisionDockerMachine start ...
	I1002 20:18:34.393196   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:34.410850   39074 main.go:141] libmachine: Using SSH client type: native
	I1002 20:18:34.411066   39074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:18:34.411072   39074 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:18:34.552329   39074 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
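The native SSH client above dials 127.0.0.1:32778, a host port discovered via the `docker container inspect` template logged just before it. A minimal Go sketch of the same lookup (an illustration assuming the Docker CLI is on PATH, not minikube's actual source):

	// sshport.go - sketch of the published-port lookup shown in the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template the logged inspect call uses: the first host port bound to 22/tcp.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-753218").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh endpoint: 127.0.0.1:" + strings.TrimSpace(string(out))) // 32778 in this run
	}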
	I1002 20:18:34.552359   39074 ubuntu.go:182] provisioning hostname "functional-753218"
	I1002 20:18:34.552416   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:34.570052   39074 main.go:141] libmachine: Using SSH client type: native
	I1002 20:18:34.570307   39074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:18:34.570319   39074 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753218 && echo "functional-753218" | sudo tee /etc/hostname
	I1002 20:18:34.721441   39074 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753218
	
	I1002 20:18:34.721512   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:34.738897   39074 main.go:141] libmachine: Using SSH client type: native
	I1002 20:18:34.739113   39074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:18:34.739125   39074 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753218/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753218' | sudo tee -a /etc/hosts; 
				fi
			fi
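The /etc/hosts edit above is deliberately idempotent: keep any line that already maps the hostname, otherwise rewrite the stock 127.0.1.1 entry, otherwise append one. A rough local equivalent of that logic in Go (illustrative only, not minikube's code):

	// hostsentry.go - sketch of the idempotent /etc/hosts update performed over SSH above.
	package main

	import (
		"fmt"
		"strings"
	)

	func ensureHostsEntry(hosts, name string) string {
		lines := strings.Split(hosts, "\n")
		for _, l := range lines {
			if strings.HasSuffix(l, " "+name) || strings.HasSuffix(l, "\t"+name) {
				return hosts // an entry for the hostname already exists
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name // replace the stock entry
				return strings.Join(lines, "\n")
			}
		}
		return hosts + "\n127.0.1.1 " + name // no match at all: append
	}

	func main() {
		fmt.Println(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 ubuntu", "functional-753218"))
	}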
	I1002 20:18:34.881059   39074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:18:34.881084   39074 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:18:34.881113   39074 ubuntu.go:190] setting up certificates
	I1002 20:18:34.881121   39074 provision.go:84] configureAuth start
	I1002 20:18:34.881164   39074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:18:34.899501   39074 provision.go:143] copyHostCerts
	I1002 20:18:34.899560   39074 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:18:34.899574   39074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:18:34.899678   39074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:18:34.899811   39074 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:18:34.899820   39074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:18:34.899861   39074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:18:34.899952   39074 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:18:34.899957   39074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:18:34.899992   39074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:18:34.900070   39074 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.functional-753218 san=[127.0.0.1 192.168.49.2 functional-753218 localhost minikube]
	I1002 20:18:35.209717   39074 provision.go:177] copyRemoteCerts
	I1002 20:18:35.209761   39074 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:18:35.209800   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.226488   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:35.326447   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:18:35.342793   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:18:35.359162   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:18:35.375197   39074 provision.go:87] duration metric: took 494.066038ms to configureAuth
	I1002 20:18:35.375214   39074 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:18:35.375353   39074 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:18:35.375460   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.392271   39074 main.go:141] libmachine: Using SSH client type: native
	I1002 20:18:35.392535   39074 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:18:35.392555   39074 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:18:35.662001   39074 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:18:35.662017   39074 machine.go:96] duration metric: took 1.268875772s to provisionDockerMachine
	I1002 20:18:35.662029   39074 start.go:294] postStartSetup for "functional-753218" (driver="docker")
	I1002 20:18:35.662042   39074 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:18:35.662106   39074 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:18:35.662147   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.679558   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:35.779752   39074 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:18:35.783115   39074 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:18:35.783131   39074 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:18:35.783153   39074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:18:35.783280   39074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:18:35.783385   39074 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:18:35.783488   39074 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts -> hosts in /etc/test/nested/copy/12851
	I1002 20:18:35.783529   39074 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/12851
	I1002 20:18:35.791362   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:18:35.807703   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts --> /etc/test/nested/copy/12851/hosts (40 bytes)
	I1002 20:18:35.824578   39074 start.go:297] duration metric: took 162.536937ms for postStartSetup
	I1002 20:18:35.824707   39074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:18:35.824741   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.842117   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:35.939428   39074 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:18:35.943787   39074 fix.go:57] duration metric: took 1.569934708s for fixHost
	I1002 20:18:35.943804   39074 start.go:84] releasing machines lock for "functional-753218", held for 1.569972452s
	I1002 20:18:35.943864   39074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753218
	I1002 20:18:35.960772   39074 ssh_runner.go:195] Run: cat /version.json
	I1002 20:18:35.960815   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.960859   39074 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:18:35.960900   39074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:18:35.978069   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:35.978425   39074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:18:36.126122   39074 ssh_runner.go:195] Run: systemctl --version
	I1002 20:18:36.132369   39074 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:18:36.165368   39074 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:18:36.169751   39074 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:18:36.169819   39074 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:18:36.177394   39074 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:18:36.177405   39074 start.go:496] detecting cgroup driver to use...
	I1002 20:18:36.177434   39074 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:18:36.177487   39074 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:18:36.191941   39074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:18:36.203333   39074 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:18:36.203390   39074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:18:36.216968   39074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:18:36.228214   39074 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:18:36.308949   39074 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:18:36.392928   39074 docker.go:234] disabling docker service ...
	I1002 20:18:36.392976   39074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:18:36.406808   39074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:18:36.418402   39074 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:18:36.501067   39074 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:18:36.583824   39074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:18:36.595669   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:18:36.609110   39074 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:18:36.609154   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.617194   39074 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:18:36.617240   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.625324   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.633155   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.641048   39074 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:18:36.648837   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.656786   39074 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.664478   39074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:18:36.672362   39074 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:18:36.678936   39074 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:18:36.685474   39074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:18:36.766185   39074 ssh_runner.go:195] Run: sudo systemctl restart crio
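Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings, which the systemctl restart then picks up (a reconstruction from the commands, with section names as in stock CRI-O configs, not a capture from the node):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]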
	I1002 20:18:36.872474   39074 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:18:36.872521   39074 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:18:36.876161   39074 start.go:564] Will wait 60s for crictl version
	I1002 20:18:36.876199   39074 ssh_runner.go:195] Run: which crictl
	I1002 20:18:36.879320   39074 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:18:36.901521   39074 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:18:36.901576   39074 ssh_runner.go:195] Run: crio --version
	I1002 20:18:36.927454   39074 ssh_runner.go:195] Run: crio --version
	I1002 20:18:36.955669   39074 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:18:36.956820   39074 cli_runner.go:164] Run: docker network inspect functional-753218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:18:36.973453   39074 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:18:36.979247   39074 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 20:18:36.980537   39074 kubeadm.go:883] updating cluster {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:18:36.980633   39074 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:18:36.980707   39074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:18:37.012555   39074 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:18:37.012566   39074 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:18:37.012602   39074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:18:37.037114   39074 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:18:37.037125   39074 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:18:37.037130   39074 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:18:37.037235   39074 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:18:37.037301   39074 ssh_runner.go:195] Run: crio config
	I1002 20:18:37.080633   39074 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 20:18:37.080675   39074 cni.go:84] Creating CNI manager for ""
	I1002 20:18:37.080685   39074 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:18:37.080697   39074 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:18:37.080715   39074 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753218 NodeName:functional-753218 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:18:37.080819   39074 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753218"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
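Outside a test run, a generated manifest like the one above can be sanity-checked with `kubeadm config validate` (available in recent kubeadm releases). A hypothetical one-off helper, shelling out the same way the harness does:

	// validatecfg.go - hypothetical check of the generated kubeadm manifest; not part of this run.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubeadm", "config", "validate",
			"--config", "/var/tmp/minikube/kubeadm.yaml").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("validation failed:", err)
		}
	}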
	
	I1002 20:18:37.080866   39074 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:18:37.088458   39074 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:18:37.088499   39074 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:18:37.095835   39074 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:18:37.107722   39074 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:18:37.119278   39074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1002 20:18:37.130821   39074 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:18:37.134590   39074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:18:37.217285   39074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:18:37.229402   39074 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218 for IP: 192.168.49.2
	I1002 20:18:37.229423   39074 certs.go:195] generating shared ca certs ...
	I1002 20:18:37.229445   39074 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:18:37.229580   39074 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:18:37.229612   39074 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:18:37.229635   39074 certs.go:257] generating profile certs ...
	I1002 20:18:37.229744   39074 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.key
	I1002 20:18:37.229781   39074 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key.2c64f804
	I1002 20:18:37.229820   39074 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key
	I1002 20:18:37.229920   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:18:37.229944   39074 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:18:37.229949   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:18:37.229969   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:18:37.229988   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:18:37.230004   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:18:37.230036   39074 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:18:37.230546   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:18:37.247164   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:18:37.262985   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:18:37.279026   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:18:37.294907   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:18:37.311017   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:18:37.326759   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:18:37.342531   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:18:37.358985   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:18:37.375049   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:18:37.390853   39074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:18:37.406776   39074 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:18:37.418137   39074 ssh_runner.go:195] Run: openssl version
	I1002 20:18:37.423758   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:18:37.431400   39074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:18:37.434759   39074 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:18:37.434796   39074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:18:37.469193   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:18:37.476976   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:18:37.484860   39074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:18:37.488438   39074 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:18:37.488489   39074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:18:37.521688   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:18:37.529613   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:18:37.537558   39074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:18:37.541046   39074 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:18:37.541078   39074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:18:37.574961   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
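The 8-hex-digit link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes; a symlink named after a certificate's hash is what makes it discoverable under /etc/ssl/certs. A loose Go sketch of that pattern (an assumed illustration, not minikube's implementation):

	// certlink.go - sketch: link a CA into /etc/ssl/certs under its OpenSSL subject hash.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func linkCA(pem string) error {
		// `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. b5213941.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		if _, err := os.Lstat(link); err == nil {
			return nil // already linked
		}
		return os.Symlink(pem, link)
	}

	func main() {
		fmt.Println(linkCA("/usr/share/ca-certificates/minikubeCA.pem"))
	}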
	I1002 20:18:37.582802   39074 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:18:37.586377   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:18:37.620185   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:18:37.653623   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:18:37.686983   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:18:37.720317   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:18:37.753617   39074 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 20:18:37.787371   39074 kubeadm.go:400] StartCluster: {Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:18:37.787431   39074 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:18:37.787474   39074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:18:37.813804   39074 cri.go:89] found id: ""
	I1002 20:18:37.813849   39074 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:18:37.821398   39074 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:18:37.821423   39074 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:18:37.821468   39074 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:18:37.828438   39074 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:18:37.828913   39074 kubeconfig.go:125] found "functional-753218" server: "https://192.168.49.2:8441"
	I1002 20:18:37.830019   39074 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:18:37.837252   39074 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 20:04:06.241851372 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 20:18:37.128983250 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
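The drift check above works because `diff -u` encodes its verdict in the exit status: 0 for identical files, 1 when they differ, 2 on trouble. A minimal Go sketch of that decision (mirroring the logged behavior, not minikube's source):

	// drift.go - sketch of the kubeadm.yaml drift detection logged above.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func configDrift(oldPath, newPath string) (bool, string, error) {
		out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
		if err == nil {
			return false, "", nil // exit 0: no drift
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			return true, string(out), nil // exit 1: files differ, reconfigure
		}
		return false, "", err // exit 2 or worse: report the error
	}

	func main() {
		drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println("drift:", drift, "err:", err)
		fmt.Print(diff)
	}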
	I1002 20:18:37.837272   39074 kubeadm.go:1160] stopping kube-system containers ...
	I1002 20:18:37.837284   39074 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 20:18:37.837326   39074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:18:37.863302   39074 cri.go:89] found id: ""
	I1002 20:18:37.863361   39074 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 20:18:37.911147   39074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:18:37.918894   39074 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 20:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  2 20:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  2 20:08 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  2 20:08 /etc/kubernetes/scheduler.conf
	
	I1002 20:18:37.918950   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:18:37.926065   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:18:37.933031   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:18:37.933065   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:18:37.939972   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:18:37.946875   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:18:37.946911   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:18:37.953620   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:18:37.960544   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:18:37.960573   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:18:37.967317   39074 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:18:37.974311   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:38.013321   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:39.074022   39074 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.060677583s)
	I1002 20:18:39.074075   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:39.228791   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:39.281116   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:18:39.328956   39074 api_server.go:52] waiting for apiserver process to appear ...
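The probes that follow fire every ~500ms until `pgrep` reports a kube-apiserver process (exit status 0) or the wait gives up. A rough equivalent loop in Go (illustrative only; the timeout value is an assumption):

	// apiwait.go - sketch of the ~500ms apiserver polling visible below.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 as soon as a matching process exists.
			if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		fmt.Println(waitForAPIServer(90 * time.Second))
	}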
	I1002 20:18:39.329020   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... identical pgrep poll repeated every ~500ms, 20:18:39.829 through 20:19:38.329, with no kube-apiserver process found ...]
	I1002 20:19:38.829677   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
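
The run above is minikube waiting for the apiserver process: the same pgrep is retried on a ~500ms tick until it succeeds or the wait times out. A minimal sketch of that polling pattern, with a hypothetical runOverSSH helper standing in for minikube's ssh_runner (illustrative, not the project's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runOverSSH is a stand-in for running a command on the minikube node;
// here it simply runs the command locally through bash.
func runOverSSH(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

// waitForAPIServer polls pgrep every 500ms; pgrep exits 0 only when a
// process matching the pattern exists, so a nil error means the
// apiserver process has appeared.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := runOverSSH(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err)
	}
}
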
	I1002 20:19:39.329725   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:39.329777   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:39.355028   39074 cri.go:89] found id: ""
	I1002 20:19:39.355041   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.355048   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:39.355053   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:39.355092   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:39.380001   39074 cri.go:89] found id: ""
	I1002 20:19:39.380017   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.380026   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:39.380031   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:39.380090   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:39.405251   39074 cri.go:89] found id: ""
	I1002 20:19:39.405267   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.405273   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:39.405277   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:39.405321   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:39.430719   39074 cri.go:89] found id: ""
	I1002 20:19:39.430732   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.430739   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:39.430745   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:39.430794   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:39.454916   39074 cri.go:89] found id: ""
	I1002 20:19:39.454929   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.454936   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:39.454940   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:39.454981   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:39.478922   39074 cri.go:89] found id: ""
	I1002 20:19:39.478934   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.478940   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:39.478944   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:39.478983   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:39.503714   39074 cri.go:89] found id: ""
	I1002 20:19:39.503731   39074 logs.go:282] 0 containers: []
	W1002 20:19:39.503739   39074 logs.go:284] No container was found matching "kindnet"
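
When the process never appears, the log falls back to asking the container runtime directly: each "listing CRI containers" line is one crictl invocation, and empty output from "crictl ps -a --quiet --name=<component>" is exactly what produces the found id: "" / No container was found pairs above. A hedged sketch of that check (illustrative, not minikube's cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainers shells out the same way the log does: --quiet prints one
// container ID per line, so empty output means nothing matched the name.
func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		ids, err := findContainers(c)
		switch {
		case err != nil:
			fmt.Printf("crictl failed for %s: %v\n", c, err)
		case len(ids) == 0:
			fmt.Printf("no container was found matching %q\n", c)
		default:
			fmt.Printf("%s: %v\n", c, ids)
		}
	}
}
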
	I1002 20:19:39.503749   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:39.503760   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:39.573887   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:39.573907   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:39.585174   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:39.585191   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:39.639301   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:39.632705    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.633125    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.634665    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.635127    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.636638    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:19:39.632705    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.633125    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.634665    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.635127    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:39.636638    6685 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
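
The "describe nodes" failure carries its own diagnosis: kubectl cannot reach https://localhost:8441 because nothing is listening on the apiserver port yet. The same reachability fact can be checked with a plain TCP dial; this sketch is illustrative and not part of the test suite:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the apiserver port the functional test uses (8441, per the log).
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// While the apiserver is down this prints the same
		// "connect: connection refused" seen in the kubectl errors above.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
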
	I1002 20:19:39.639313   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:39.639322   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:39.699438   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:39.699455   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
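
Taken together, one diagnostic cycle gathers the same fixed set of sources on every pass: kubelet and CRI-O via journalctl, the kernel ring buffer via dmesg, "describe nodes" via the bundled kubectl, and container status via crictl with a docker fallback. A compact sketch of that fan-out using the journalctl/dmesg/crictl commands from the log verbatim (the describe-nodes step is omitted here because it needs the node's bundled kubectl and kubeconfig):

package main

import (
	"fmt"
	"os/exec"
)

// logSources mirrors the "Gathering logs for ..." commands above.
var logSources = []struct{ name, cmd string }{
	{"kubelet", "sudo journalctl -u kubelet -n 400"},
	{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	{"CRI-O", "sudo journalctl -u crio -n 400"},
	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
}

func main() {
	for _, s := range logSources {
		fmt.Printf("Gathering logs for %s ...\n", s.name)
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("  %s failed: %v\n", s.name, err)
		}
		fmt.Print(string(out))
	}
}
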
	I1002 20:19:42.228926   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:42.239185   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:42.239234   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:42.263214   39074 cri.go:89] found id: ""
	I1002 20:19:42.263230   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.263238   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:42.263245   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:42.263288   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:42.286996   39074 cri.go:89] found id: ""
	I1002 20:19:42.287009   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.287014   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:42.287019   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:42.287059   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:42.311539   39074 cri.go:89] found id: ""
	I1002 20:19:42.311555   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.311563   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:42.311568   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:42.311608   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:42.335720   39074 cri.go:89] found id: ""
	I1002 20:19:42.335735   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.335740   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:42.335744   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:42.335789   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:42.359620   39074 cri.go:89] found id: ""
	I1002 20:19:42.359635   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.359642   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:42.359658   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:42.359717   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:42.383670   39074 cri.go:89] found id: ""
	I1002 20:19:42.383684   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.383702   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:42.383708   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:42.383752   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:42.409324   39074 cri.go:89] found id: ""
	I1002 20:19:42.409337   39074 logs.go:282] 0 containers: []
	W1002 20:19:42.409343   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:42.409350   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:42.409358   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:42.463480   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:42.456002    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.456468    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.458629    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.459138    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.460809    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:19:42.456002    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.456468    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.458629    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.459138    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:42.460809    6802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:19:42.463498   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:42.463508   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:42.522978   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:42.522994   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:42.550529   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:42.550544   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:42.618426   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:42.618446   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:45.130475   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:45.140935   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:45.140984   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:45.166296   39074 cri.go:89] found id: ""
	I1002 20:19:45.166307   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.166313   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:45.166318   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:45.166370   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:45.190669   39074 cri.go:89] found id: ""
	I1002 20:19:45.190684   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.190690   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:45.190694   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:45.190748   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:45.215836   39074 cri.go:89] found id: ""
	I1002 20:19:45.215861   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.215866   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:45.215870   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:45.215911   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:45.240020   39074 cri.go:89] found id: ""
	I1002 20:19:45.240032   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.240037   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:45.240054   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:45.240103   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:45.265411   39074 cri.go:89] found id: ""
	I1002 20:19:45.265424   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.265430   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:45.265434   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:45.265482   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:45.289247   39074 cri.go:89] found id: ""
	I1002 20:19:45.289262   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.289272   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:45.289277   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:45.289327   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:45.313127   39074 cri.go:89] found id: ""
	I1002 20:19:45.313142   39074 logs.go:282] 0 containers: []
	W1002 20:19:45.313149   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:45.313157   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:45.313175   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:45.383170   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:45.383189   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:45.394492   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:45.394506   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:45.448758   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:45.441841    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.442413    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.443998    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.444386    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.445933    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:19:45.441841    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.442413    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.443998    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.444386    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:45.445933    6938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:19:45.448771   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:45.448780   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:45.512497   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:45.512515   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:48.041482   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:48.051591   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:48.051635   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:48.076424   39074 cri.go:89] found id: ""
	I1002 20:19:48.076441   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.076449   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:48.076454   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:48.076499   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:48.100297   39074 cri.go:89] found id: ""
	I1002 20:19:48.100324   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.100330   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:48.100334   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:48.100378   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:48.124828   39074 cri.go:89] found id: ""
	I1002 20:19:48.124845   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.124854   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:48.124860   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:48.124916   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:48.148977   39074 cri.go:89] found id: ""
	I1002 20:19:48.148991   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.148998   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:48.149002   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:48.149045   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:48.172962   39074 cri.go:89] found id: ""
	I1002 20:19:48.172978   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.172987   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:48.172992   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:48.173078   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:48.196028   39074 cri.go:89] found id: ""
	I1002 20:19:48.196047   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.196056   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:48.196063   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:48.196116   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:48.219489   39074 cri.go:89] found id: ""
	I1002 20:19:48.219506   39074 logs.go:282] 0 containers: []
	W1002 20:19:48.219514   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:48.219524   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:48.219535   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:48.285750   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:48.285767   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:48.296759   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:48.296773   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:48.350552   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:48.343634    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.344266    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.345849    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.346274    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.347827    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:19:48.343634    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.344266    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.345849    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.346274    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:48.347827    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:19:48.350562   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:48.350570   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:48.415152   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:48.415174   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:50.944831   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:50.955007   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:50.955051   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:50.979562   39074 cri.go:89] found id: ""
	I1002 20:19:50.979574   39074 logs.go:282] 0 containers: []
	W1002 20:19:50.979580   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:50.979586   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:50.979626   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:51.005726   39074 cri.go:89] found id: ""
	I1002 20:19:51.005738   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.005744   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:51.005748   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:51.005789   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:51.029734   39074 cri.go:89] found id: ""
	I1002 20:19:51.029751   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.029760   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:51.029766   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:51.029810   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:51.053889   39074 cri.go:89] found id: ""
	I1002 20:19:51.053904   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.053912   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:51.053918   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:51.053970   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:51.080377   39074 cri.go:89] found id: ""
	I1002 20:19:51.080389   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.080394   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:51.080399   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:51.080438   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:51.105307   39074 cri.go:89] found id: ""
	I1002 20:19:51.105321   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.105326   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:51.105331   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:51.105371   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:51.130666   39074 cri.go:89] found id: ""
	I1002 20:19:51.130682   39074 logs.go:282] 0 containers: []
	W1002 20:19:51.130689   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:51.130700   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:51.130710   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:51.141518   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:51.141533   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:51.194182   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:51.187772    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.188306    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.189890    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.190325    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.191812    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:19:51.187772    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.188306    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.189890    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.190325    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:51.191812    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:19:51.194195   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:51.194204   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:51.253875   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:51.253894   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:51.281673   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:51.281693   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:53.847012   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:53.857350   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:53.857394   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:53.882278   39074 cri.go:89] found id: ""
	I1002 20:19:53.882291   39074 logs.go:282] 0 containers: []
	W1002 20:19:53.882297   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:53.882309   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:53.882351   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:53.906222   39074 cri.go:89] found id: ""
	I1002 20:19:53.906235   39074 logs.go:282] 0 containers: []
	W1002 20:19:53.906241   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:53.906245   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:53.906294   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:53.930975   39074 cri.go:89] found id: ""
	I1002 20:19:53.930988   39074 logs.go:282] 0 containers: []
	W1002 20:19:53.930995   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:53.930999   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:53.931045   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:53.957875   39074 cri.go:89] found id: ""
	I1002 20:19:53.957891   39074 logs.go:282] 0 containers: []
	W1002 20:19:53.957901   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:53.957907   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:53.958019   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:53.982116   39074 cri.go:89] found id: ""
	I1002 20:19:53.982129   39074 logs.go:282] 0 containers: []
	W1002 20:19:53.982135   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:53.982140   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:53.982181   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:54.006296   39074 cri.go:89] found id: ""
	I1002 20:19:54.006310   39074 logs.go:282] 0 containers: []
	W1002 20:19:54.006316   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:54.006320   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:54.006360   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:54.031088   39074 cri.go:89] found id: ""
	I1002 20:19:54.031102   39074 logs.go:282] 0 containers: []
	W1002 20:19:54.031108   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:54.031116   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:54.031125   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:19:54.041909   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:54.041951   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:54.095399   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:54.088843    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.089263    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.090810    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.091232    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.092782    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:19:54.088843    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.089263    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.090810    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.091232    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:54.092782    7296 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:19:54.095411   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:54.095438   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:54.159991   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:54.160010   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:54.187642   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:54.187676   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:56.757287   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:56.768252   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:56.768293   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:56.793773   39074 cri.go:89] found id: ""
	I1002 20:19:56.793785   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.793791   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:56.793796   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:56.793841   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:56.819484   39074 cri.go:89] found id: ""
	I1002 20:19:56.819499   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.819509   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:56.819516   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:56.819558   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:56.844773   39074 cri.go:89] found id: ""
	I1002 20:19:56.844787   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.844793   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:56.844798   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:56.844838   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:56.869847   39074 cri.go:89] found id: ""
	I1002 20:19:56.869888   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.869898   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:56.869906   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:56.869956   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:56.894519   39074 cri.go:89] found id: ""
	I1002 20:19:56.894537   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.894545   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:56.894553   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:56.894613   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:56.920670   39074 cri.go:89] found id: ""
	I1002 20:19:56.920689   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.920698   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:56.920706   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:56.920758   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:56.945515   39074 cri.go:89] found id: ""
	I1002 20:19:56.945529   39074 logs.go:282] 0 containers: []
	W1002 20:19:56.945535   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:56.945543   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:19:56.945557   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:19:57.001311   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:19:56.994723    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.995244    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.996779    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.997235    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.998722    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:19:56.994723    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.995244    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.996779    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.997235    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:19:56.998722    7412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:19:57.001323   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:57.001332   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:57.065838   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:57.065856   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:57.093387   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:57.093401   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:19:57.161709   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:19:57.161730   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
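	The sweep above (kubelet, dmesg, describe nodes, CRI-O, container status) is minikube's standard diagnostic pass, and every command in it can be rerun by hand. A minimal manual sketch, assuming shell access to the node (e.g. via `minikube ssh` with the matching profile):

	    sudo journalctl -u kubelet -n 400     # kubelet unit logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors
	    sudo journalctl -u crio -n 400        # CRI-O unit logs
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a             # all containers, any state
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig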
	I1002 20:19:59.673972   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:19:59.684279   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:19:59.684321   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:19:59.708892   39074 cri.go:89] found id: ""
	I1002 20:19:59.708905   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.708911   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:19:59.708915   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:19:59.708958   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:19:59.733806   39074 cri.go:89] found id: ""
	I1002 20:19:59.733821   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.733828   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:19:59.733834   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:19:59.733886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:19:59.758895   39074 cri.go:89] found id: ""
	I1002 20:19:59.758907   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.758913   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:19:59.758918   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:19:59.758970   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:19:59.782140   39074 cri.go:89] found id: ""
	I1002 20:19:59.782154   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.782161   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:19:59.782166   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:19:59.782211   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:19:59.806783   39074 cri.go:89] found id: ""
	I1002 20:19:59.806797   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.806803   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:19:59.806808   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:19:59.806851   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:19:59.831636   39074 cri.go:89] found id: ""
	I1002 20:19:59.831663   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.831673   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:19:59.831679   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:19:59.831725   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:19:59.855094   39074 cri.go:89] found id: ""
	I1002 20:19:59.855110   39074 logs.go:282] 0 containers: []
	W1002 20:19:59.855119   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:19:59.855128   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:19:59.855139   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:19:59.916579   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:19:59.916598   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:19:59.944216   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:19:59.944230   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:00.010694   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:00.010712   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:00.021993   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:00.022008   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:00.076257   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:00.069139    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.069711    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.071246    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.071701    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:00.073412    7565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
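	Each roughly three-second iteration in this log is the same readiness probe: look for a live kube-apiserver process, then enumerate CRI containers for every control-plane component. A rough shell equivalent of that loop (a sketch only; the real logic is minikube's Go code in logs.go/cri.go, not a script):

	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	        sudo crictl ps -a --quiet --name="$name"   # empty output = no such container, matching the 'found id: ""' lines
	      done
	      sleep 3
	    done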
	I1002 20:20:02.577956   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:02.588476   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:02.588521   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:02.612197   39074 cri.go:89] found id: ""
	I1002 20:20:02.612213   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.612224   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:02.612231   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:02.612283   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:02.636711   39074 cri.go:89] found id: ""
	I1002 20:20:02.636727   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.636737   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:02.636743   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:02.636797   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:02.660364   39074 cri.go:89] found id: ""
	I1002 20:20:02.660380   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.660389   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:02.660396   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:02.660448   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:02.684665   39074 cri.go:89] found id: ""
	I1002 20:20:02.684682   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.684689   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:02.684694   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:02.684739   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:02.710226   39074 cri.go:89] found id: ""
	I1002 20:20:02.710239   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.710247   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:02.710254   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:02.710308   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:02.735247   39074 cri.go:89] found id: ""
	I1002 20:20:02.735262   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.735271   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:02.735278   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:02.735328   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:02.760072   39074 cri.go:89] found id: ""
	I1002 20:20:02.760085   39074 logs.go:282] 0 containers: []
	W1002 20:20:02.760091   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:02.760098   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:02.760106   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:02.824182   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:02.824200   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:02.835284   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:02.835297   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:02.888320   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:02.881490    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.881999    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.883536    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.883961    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:02.885446    7669 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:02.888330   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:02.888339   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:02.952125   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:02.952145   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:05.481086   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:05.491660   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:05.491723   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:05.517036   39074 cri.go:89] found id: ""
	I1002 20:20:05.517052   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.517060   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:05.517067   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:05.517114   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:05.542299   39074 cri.go:89] found id: ""
	I1002 20:20:05.542312   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.542320   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:05.542326   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:05.542387   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:05.567213   39074 cri.go:89] found id: ""
	I1002 20:20:05.567227   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.567233   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:05.567238   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:05.567286   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:05.590782   39074 cri.go:89] found id: ""
	I1002 20:20:05.590795   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.590801   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:05.590807   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:05.590850   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:05.615825   39074 cri.go:89] found id: ""
	I1002 20:20:05.615837   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.615843   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:05.615849   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:05.615886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:05.640124   39074 cri.go:89] found id: ""
	I1002 20:20:05.640137   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.640143   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:05.640148   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:05.640191   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:05.664435   39074 cri.go:89] found id: ""
	I1002 20:20:05.664451   39074 logs.go:282] 0 containers: []
	W1002 20:20:05.664460   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:05.664469   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:05.664478   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:05.675270   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:05.675284   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:05.728958   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:05.722310    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.722829    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.724378    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.724835    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:05.726322    7790 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:05.728968   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:05.728977   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:05.789744   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:05.789763   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:05.816871   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:05.816886   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
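	Every describe-nodes attempt fails before reaching the API: the TCP connect to localhost:8441 (the apiserver port used by this profile) is refused, meaning nothing is listening there at all. Two quick probes that would confirm this from inside the node (hypothetical follow-ups, not part of the captured run):

	    curl -k https://localhost:8441/healthz   # a healthy apiserver answers "ok"; here the connect itself is refused
	    sudo ss -ltn | grep 8441                 # no listener on the port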
	I1002 20:20:08.386603   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:08.396838   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:08.396887   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:08.421504   39074 cri.go:89] found id: ""
	I1002 20:20:08.421516   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.421526   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:08.421531   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:08.421573   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:08.445525   39074 cri.go:89] found id: ""
	I1002 20:20:08.445539   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.445551   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:08.445557   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:08.445611   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:08.473912   39074 cri.go:89] found id: ""
	I1002 20:20:08.473926   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.473932   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:08.473937   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:08.473977   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:08.498551   39074 cri.go:89] found id: ""
	I1002 20:20:08.498567   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.498575   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:08.498579   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:08.498619   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:08.522969   39074 cri.go:89] found id: ""
	I1002 20:20:08.522985   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.522991   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:08.522996   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:08.523041   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:08.546557   39074 cri.go:89] found id: ""
	I1002 20:20:08.546572   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.546579   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:08.546583   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:08.546628   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:08.570570   39074 cri.go:89] found id: ""
	I1002 20:20:08.570586   39074 logs.go:282] 0 containers: []
	W1002 20:20:08.570595   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:08.570605   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:08.570619   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:08.639672   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:08.639691   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:08.651327   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:08.651345   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:08.704679   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:08.698150    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.698634    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.700211    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.700630    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:08.702066    7915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:08.704698   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:08.704710   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:08.767857   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:08.767876   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:11.297723   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:11.307921   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:11.307963   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:11.337544   39074 cri.go:89] found id: ""
	I1002 20:20:11.337560   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.337577   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:11.337584   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:11.337640   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:11.363291   39074 cri.go:89] found id: ""
	I1002 20:20:11.363306   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.363315   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:11.363325   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:11.363366   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:11.387886   39074 cri.go:89] found id: ""
	I1002 20:20:11.387905   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.387915   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:11.387922   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:11.387972   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:11.412550   39074 cri.go:89] found id: ""
	I1002 20:20:11.412565   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.412573   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:11.412579   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:11.412677   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:11.437380   39074 cri.go:89] found id: ""
	I1002 20:20:11.437396   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.437405   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:11.437411   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:11.437452   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:11.461402   39074 cri.go:89] found id: ""
	I1002 20:20:11.461415   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.461421   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:11.461426   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:11.461471   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:11.486814   39074 cri.go:89] found id: ""
	I1002 20:20:11.486828   39074 logs.go:282] 0 containers: []
	W1002 20:20:11.486833   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:11.486840   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:11.486848   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:11.497776   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:11.497791   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:11.552252   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:11.545574    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.546102    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.547700    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.548151    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:11.549707    8035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:11.552263   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:11.552278   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:11.614501   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:11.614519   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:11.641975   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:11.641990   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
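	With zero containers reported for every component, container logs have nothing to offer; the next layer down is the service units themselves. Plausible follow-up commands (not executed in this run) would be:

	    sudo systemctl status kubelet crio --no-pager   # are the units even active?
	    sudo crictl pods                                # any pod sandboxes at all?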
	I1002 20:20:14.212363   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:14.223339   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:14.223387   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:14.247765   39074 cri.go:89] found id: ""
	I1002 20:20:14.247782   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.247790   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:14.247796   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:14.247850   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:14.272207   39074 cri.go:89] found id: ""
	I1002 20:20:14.272223   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.272230   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:14.272235   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:14.272275   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:14.296884   39074 cri.go:89] found id: ""
	I1002 20:20:14.296896   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.296901   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:14.296906   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:14.296953   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:14.322400   39074 cri.go:89] found id: ""
	I1002 20:20:14.322416   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.322424   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:14.322430   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:14.322483   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:14.348457   39074 cri.go:89] found id: ""
	I1002 20:20:14.348474   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.348482   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:14.348488   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:14.348529   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:14.371846   39074 cri.go:89] found id: ""
	I1002 20:20:14.371859   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.371866   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:14.371870   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:14.371910   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:14.396739   39074 cri.go:89] found id: ""
	I1002 20:20:14.396757   39074 logs.go:282] 0 containers: []
	W1002 20:20:14.396765   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:14.396775   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:14.396785   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:14.461682   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:14.461703   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:14.473125   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:14.473138   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:14.527220   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:14.520100    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.520639    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.522150    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.522547    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:14.524758    8160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:14.527230   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:14.527243   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:14.587080   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:14.587097   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:17.117171   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:17.127800   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:17.127860   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:17.153825   39074 cri.go:89] found id: ""
	I1002 20:20:17.153838   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.153845   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:17.153850   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:17.153890   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:17.179191   39074 cri.go:89] found id: ""
	I1002 20:20:17.179208   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.179218   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:17.179225   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:17.179283   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:17.203643   39074 cri.go:89] found id: ""
	I1002 20:20:17.203670   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.203677   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:17.203682   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:17.203729   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:17.228485   39074 cri.go:89] found id: ""
	I1002 20:20:17.228500   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.228509   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:17.228513   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:17.228552   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:17.254499   39074 cri.go:89] found id: ""
	I1002 20:20:17.254513   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.254519   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:17.254524   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:17.254568   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:17.280943   39074 cri.go:89] found id: ""
	I1002 20:20:17.280959   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.280968   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:17.280975   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:17.281022   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:17.306591   39074 cri.go:89] found id: ""
	I1002 20:20:17.306607   39074 logs.go:282] 0 containers: []
	W1002 20:20:17.306615   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:17.306624   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:17.306638   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:17.365595   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:17.358275    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.359542    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.359993    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.361559    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:17.362067    8273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:17.365605   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:17.365615   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:17.428722   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:17.428741   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:17.456720   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:17.456736   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:17.526400   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:17.526419   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:20.038675   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:20.049608   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:20.049670   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:20.075162   39074 cri.go:89] found id: ""
	I1002 20:20:20.075178   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.075193   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:20.075200   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:20.075244   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:20.100714   39074 cri.go:89] found id: ""
	I1002 20:20:20.100730   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.100739   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:20.100745   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:20.100796   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:20.125515   39074 cri.go:89] found id: ""
	I1002 20:20:20.125530   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.125536   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:20.125541   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:20.125590   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:20.150152   39074 cri.go:89] found id: ""
	I1002 20:20:20.150166   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.150172   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:20.150176   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:20.150219   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:20.174386   39074 cri.go:89] found id: ""
	I1002 20:20:20.174400   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.174405   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:20.174410   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:20.174451   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:20.198954   39074 cri.go:89] found id: ""
	I1002 20:20:20.198967   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.198974   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:20.198978   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:20.199019   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:20.223494   39074 cri.go:89] found id: ""
	I1002 20:20:20.223506   39074 logs.go:282] 0 containers: []
	W1002 20:20:20.223512   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:20.223520   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:20.223530   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:20.234227   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:20.234242   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:20.287508   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:20.281135    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.281556    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.283225    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.283624    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:20.285109    8402 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:20.287521   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:20.287530   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:20.353299   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:20.353316   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:20.381247   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:20.381264   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
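	Note the fallback chain in the "container status" sweep: minikube resolves crictl via `which`, and if the whole crictl invocation fails it retries with docker, so the same collector works across CRI-O, containerd, and Docker runtimes. The exact command, verbatim from the log:

	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a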
	I1002 20:20:22.948641   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:22.958867   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:22.958923   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:22.982867   39074 cri.go:89] found id: ""
	I1002 20:20:22.982888   39074 logs.go:282] 0 containers: []
	W1002 20:20:22.982896   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:22.982905   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:22.982963   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:23.008002   39074 cri.go:89] found id: ""
	I1002 20:20:23.008019   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.008025   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:23.008031   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:23.008102   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:23.032729   39074 cri.go:89] found id: ""
	I1002 20:20:23.032745   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.032755   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:23.032761   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:23.032805   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:23.057489   39074 cri.go:89] found id: ""
	I1002 20:20:23.057506   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.057513   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:23.057520   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:23.057574   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:23.082449   39074 cri.go:89] found id: ""
	I1002 20:20:23.082465   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.082473   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:23.082480   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:23.082533   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:23.106284   39074 cri.go:89] found id: ""
	I1002 20:20:23.106300   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.106308   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:23.106314   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:23.106356   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:23.131674   39074 cri.go:89] found id: ""
	I1002 20:20:23.131689   39074 logs.go:282] 0 containers: []
	W1002 20:20:23.131698   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:23.131708   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:23.131719   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:23.202584   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:23.202606   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:23.213553   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:23.213567   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:23.267093   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:23.260296    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.260752    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.262302    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.262721    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:23.264215    8529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:23.267107   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:23.267117   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:23.330039   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:23.330057   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:25.859757   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:25.870050   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:25.870094   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:25.893890   39074 cri.go:89] found id: ""
	I1002 20:20:25.893903   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.893909   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:25.893913   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:25.893962   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:25.918711   39074 cri.go:89] found id: ""
	I1002 20:20:25.918724   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.918731   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:25.918740   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:25.918790   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:25.943028   39074 cri.go:89] found id: ""
	I1002 20:20:25.943040   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.943046   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:25.943050   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:25.943100   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:25.968555   39074 cri.go:89] found id: ""
	I1002 20:20:25.968569   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.968575   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:25.968580   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:25.968630   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:25.993321   39074 cri.go:89] found id: ""
	I1002 20:20:25.993334   39074 logs.go:282] 0 containers: []
	W1002 20:20:25.993340   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:25.993344   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:25.993393   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:26.017729   39074 cri.go:89] found id: ""
	I1002 20:20:26.017755   39074 logs.go:282] 0 containers: []
	W1002 20:20:26.017761   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:26.017766   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:26.017807   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:26.042867   39074 cri.go:89] found id: ""
	I1002 20:20:26.042879   39074 logs.go:282] 0 containers: []
	W1002 20:20:26.042885   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:26.042892   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:26.042900   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:26.109498   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:26.109517   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:26.120700   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:26.120715   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:26.174158   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:26.167675    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.168158    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.169684    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.170006    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:26.171555    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:26.174169   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:26.174177   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:26.232801   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:26.232820   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:28.760440   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:28.770974   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:28.771015   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:28.795071   39074 cri.go:89] found id: ""
	I1002 20:20:28.795084   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.795089   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:28.795094   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:28.795137   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:28.820101   39074 cri.go:89] found id: ""
	I1002 20:20:28.820114   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.820120   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:28.820125   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:28.820174   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:28.844954   39074 cri.go:89] found id: ""
	I1002 20:20:28.844967   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.844974   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:28.844978   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:28.845021   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:28.869971   39074 cri.go:89] found id: ""
	I1002 20:20:28.869984   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.869991   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:28.869996   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:28.870035   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:28.894419   39074 cri.go:89] found id: ""
	I1002 20:20:28.894434   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.894443   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:28.894454   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:28.894497   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:28.919785   39074 cri.go:89] found id: ""
	I1002 20:20:28.919798   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.919804   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:28.919808   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:28.919847   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:28.945626   39074 cri.go:89] found id: ""
	I1002 20:20:28.945644   39074 logs.go:282] 0 containers: []
	W1002 20:20:28.945666   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:28.945676   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:28.945688   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:29.013406   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:29.013424   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:29.024733   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:29.024749   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:29.079492   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:29.073004    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.073547    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.075195    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.075620    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:29.077061    8774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:29.079501   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:29.079510   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:29.143375   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:29.143393   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:31.673342   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:31.683685   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:31.683744   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:31.708355   39074 cri.go:89] found id: ""
	I1002 20:20:31.708368   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.708374   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:31.708378   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:31.708426   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:31.732066   39074 cri.go:89] found id: ""
	I1002 20:20:31.732080   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.732085   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:31.732090   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:31.732128   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:31.756955   39074 cri.go:89] found id: ""
	I1002 20:20:31.756968   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.756975   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:31.756981   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:31.757031   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:31.783141   39074 cri.go:89] found id: ""
	I1002 20:20:31.783157   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.783163   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:31.783168   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:31.783209   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:31.807678   39074 cri.go:89] found id: ""
	I1002 20:20:31.807691   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.807698   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:31.807703   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:31.807745   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:31.831482   39074 cri.go:89] found id: ""
	I1002 20:20:31.831494   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.831500   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:31.831504   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:31.831548   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:31.855667   39074 cri.go:89] found id: ""
	I1002 20:20:31.855683   39074 logs.go:282] 0 containers: []
	W1002 20:20:31.855692   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:31.855700   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:31.855710   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:31.882380   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:31.882395   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:31.947814   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:31.947838   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:31.958919   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:31.958934   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:32.013721   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:32.006971    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.007473    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.009037    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.009432    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:32.010967    8913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:32.013731   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:32.013742   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:34.575751   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:34.585980   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:34.586030   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:34.610997   39074 cri.go:89] found id: ""
	I1002 20:20:34.611013   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.611019   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:34.611024   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:34.611076   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:34.635375   39074 cri.go:89] found id: ""
	I1002 20:20:34.635388   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.635394   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:34.635401   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:34.635449   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:34.659513   39074 cri.go:89] found id: ""
	I1002 20:20:34.659526   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.659532   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:34.659536   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:34.659584   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:34.683614   39074 cri.go:89] found id: ""
	I1002 20:20:34.683628   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.683634   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:34.683638   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:34.683709   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:34.707536   39074 cri.go:89] found id: ""
	I1002 20:20:34.707548   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.707554   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:34.707558   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:34.707606   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:34.730813   39074 cri.go:89] found id: ""
	I1002 20:20:34.730829   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.730838   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:34.730844   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:34.730886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:34.756746   39074 cri.go:89] found id: ""
	I1002 20:20:34.756758   39074 logs.go:282] 0 containers: []
	W1002 20:20:34.756763   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:34.756770   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:34.756779   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:34.823845   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:34.823864   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:34.834944   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:34.834959   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:34.889016   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:34.882235    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.882739    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.884456    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.884966    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:34.886550    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:34.889027   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:34.889039   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:34.952102   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:34.952120   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:37.482142   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:37.492739   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:37.492783   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:37.518265   39074 cri.go:89] found id: ""
	I1002 20:20:37.518279   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.518285   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:37.518290   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:37.518332   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:37.544309   39074 cri.go:89] found id: ""
	I1002 20:20:37.544322   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.544327   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:37.544332   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:37.544371   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:37.568928   39074 cri.go:89] found id: ""
	I1002 20:20:37.568947   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.568955   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:37.568960   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:37.569000   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:37.593112   39074 cri.go:89] found id: ""
	I1002 20:20:37.593125   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.593131   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:37.593135   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:37.593175   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:37.617378   39074 cri.go:89] found id: ""
	I1002 20:20:37.617393   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.617399   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:37.617404   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:37.617446   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:37.641497   39074 cri.go:89] found id: ""
	I1002 20:20:37.641509   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.641514   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:37.641519   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:37.641560   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:37.665025   39074 cri.go:89] found id: ""
	I1002 20:20:37.665037   39074 logs.go:282] 0 containers: []
	W1002 20:20:37.665043   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:37.665050   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:37.665059   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:37.729867   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:37.729886   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:37.741144   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:37.741161   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:37.794545   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:37.788104    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.788618    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.790124    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.790578    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:37.791673    9135 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:37.794554   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:37.794563   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:37.858517   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:37.858537   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:40.387221   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:40.397406   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:40.397456   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:40.422226   39074 cri.go:89] found id: ""
	I1002 20:20:40.422241   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.422249   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:40.422256   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:40.422312   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:40.448898   39074 cri.go:89] found id: ""
	I1002 20:20:40.448914   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.448922   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:40.448928   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:40.448970   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:40.473866   39074 cri.go:89] found id: ""
	I1002 20:20:40.473883   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.473891   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:40.473898   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:40.473940   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:40.499789   39074 cri.go:89] found id: ""
	I1002 20:20:40.499804   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.499820   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:40.499827   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:40.499870   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:40.524055   39074 cri.go:89] found id: ""
	I1002 20:20:40.524070   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.524078   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:40.524084   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:40.524131   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:40.549681   39074 cri.go:89] found id: ""
	I1002 20:20:40.549697   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.549705   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:40.549709   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:40.549751   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:40.574534   39074 cri.go:89] found id: ""
	I1002 20:20:40.574551   39074 logs.go:282] 0 containers: []
	W1002 20:20:40.574559   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:40.574568   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:40.574585   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:40.585332   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:40.585345   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:40.639552   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:40.632883    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.633368    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.634904    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.635294    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:40.636825    9269 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:40.639561   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:40.639570   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:40.703074   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:40.703093   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:40.731458   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:40.731471   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:43.302779   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:43.313194   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:43.313249   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:43.340348   39074 cri.go:89] found id: ""
	I1002 20:20:43.340361   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.340367   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:43.340372   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:43.340416   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:43.365438   39074 cri.go:89] found id: ""
	I1002 20:20:43.365453   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.365461   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:43.365467   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:43.365530   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:43.392295   39074 cri.go:89] found id: ""
	I1002 20:20:43.392308   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.392314   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:43.392319   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:43.392358   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:43.417313   39074 cri.go:89] found id: ""
	I1002 20:20:43.417326   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.417332   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:43.417336   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:43.417381   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:43.441890   39074 cri.go:89] found id: ""
	I1002 20:20:43.441907   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.441913   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:43.441917   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:43.441959   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:43.467410   39074 cri.go:89] found id: ""
	I1002 20:20:43.467427   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.467438   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:43.467444   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:43.467501   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:43.492142   39074 cri.go:89] found id: ""
	I1002 20:20:43.492154   39074 logs.go:282] 0 containers: []
	W1002 20:20:43.492160   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:43.492168   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:43.492178   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:43.520876   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:43.520907   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:43.586242   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:43.586258   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:43.597341   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:43.597355   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:43.651087   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:43.644558    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.645043    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.646588    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.647003    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:43.648460    9407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:43.651098   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:43.651112   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:46.210362   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:46.220658   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:46.220710   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:46.245577   39074 cri.go:89] found id: ""
	I1002 20:20:46.245591   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.245597   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:46.245601   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:46.245641   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:46.270950   39074 cri.go:89] found id: ""
	I1002 20:20:46.270965   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.270974   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:46.270979   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:46.271024   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:46.295887   39074 cri.go:89] found id: ""
	I1002 20:20:46.295903   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.295911   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:46.295917   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:46.295969   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:46.321705   39074 cri.go:89] found id: ""
	I1002 20:20:46.321721   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.321730   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:46.321736   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:46.321785   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:46.348811   39074 cri.go:89] found id: ""
	I1002 20:20:46.348827   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.348836   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:46.348842   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:46.348900   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:46.373477   39074 cri.go:89] found id: ""
	I1002 20:20:46.373493   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.373502   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:46.373508   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:46.373552   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:46.398884   39074 cri.go:89] found id: ""
	I1002 20:20:46.398900   39074 logs.go:282] 0 containers: []
	W1002 20:20:46.398908   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:46.398917   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:46.398926   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:46.463113   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:46.463131   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:46.474566   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:46.474578   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:46.529468   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:46.522633    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.523203    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.524813    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.525199    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:46.526736    9519 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:46.529479   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:46.529489   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:46.590223   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:46.590241   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
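The cycle above is minikube's control-plane probe: for each expected component it asks the CRI runtime for matching containers and treats empty output as "not found". A minimal shell reproduction of that probe, run on the node over SSH (the component list is read off the log lines above, not taken from minikube's source):

	# Probe each expected control-plane component the way the log does (sketch, names assumed from the log):
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  [ -z "$ids" ] && echo "no container matching $c"
	done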
	I1002 20:20:49.118745   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:49.128971   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:49.129012   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:49.155632   39074 cri.go:89] found id: ""
	I1002 20:20:49.155662   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.155683   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:49.155689   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:49.155734   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:49.180611   39074 cri.go:89] found id: ""
	I1002 20:20:49.180629   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.180635   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:49.180639   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:49.180703   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:49.206534   39074 cri.go:89] found id: ""
	I1002 20:20:49.206557   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.206563   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:49.206568   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:49.206617   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:49.231608   39074 cri.go:89] found id: ""
	I1002 20:20:49.231625   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.231633   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:49.231641   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:49.231713   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:49.256407   39074 cri.go:89] found id: ""
	I1002 20:20:49.256426   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.256433   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:49.256439   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:49.256490   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:49.281494   39074 cri.go:89] found id: ""
	I1002 20:20:49.281509   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.281517   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:49.281524   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:49.281571   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:49.306502   39074 cri.go:89] found id: ""
	I1002 20:20:49.306518   39074 logs.go:282] 0 containers: []
	W1002 20:20:49.306526   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:49.306534   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:49.306543   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:49.374386   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:49.374408   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:49.385910   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:49.385928   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:49.440525   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:49.433626    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.434180    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.435811    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.436224    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:49.437741    9633 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:49.440537   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:49.440549   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:49.501317   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:49.501334   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
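Each "describe nodes" attempt fails identically: kubectl dials localhost:8441 and is refused, which matches the empty kube-apiserver probe above (no process, so nothing listening) rather than a kubeconfig or TLS problem. Two quick spot-checks on the node would confirm that reading (standard tools, not minikube commands):

	# Is anything bound to the apiserver port named in the log, and does the process exist?
	sudo ss -ltn | grep 8441 || echo "nothing listening on :8441"
	sudo pgrep -af kube-apiserver || echo "no kube-apiserver process"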
	I1002 20:20:52.031253   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:52.041701   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:52.041754   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:52.066302   39074 cri.go:89] found id: ""
	I1002 20:20:52.066315   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.066321   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:52.066325   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:52.066375   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:52.091575   39074 cri.go:89] found id: ""
	I1002 20:20:52.091591   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.091600   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:52.091606   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:52.091674   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:52.115838   39074 cri.go:89] found id: ""
	I1002 20:20:52.115854   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.115861   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:52.115867   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:52.115914   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:52.141387   39074 cri.go:89] found id: ""
	I1002 20:20:52.141402   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.141412   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:52.141417   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:52.141460   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:52.166810   39074 cri.go:89] found id: ""
	I1002 20:20:52.166823   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.166828   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:52.166832   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:52.166872   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:52.192399   39074 cri.go:89] found id: ""
	I1002 20:20:52.192413   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.192420   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:52.192425   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:52.192473   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:52.217364   39074 cri.go:89] found id: ""
	I1002 20:20:52.217378   39074 logs.go:282] 0 containers: []
	W1002 20:20:52.217385   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:52.217391   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:52.217401   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:52.272135   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:52.265457    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.266093    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.267566    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.268058    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:52.269531    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:52.272144   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:52.272152   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:52.334330   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:52.334352   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:52.364500   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:52.364514   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:52.427683   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:52.427702   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
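For reference, the full collection sequence minikube runs in each cycle, taken verbatim from the Run: lines above; only the order varies between cycles (in the one just above, CRI-O and container status were gathered before kubelet and dmesg):

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a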
	I1002 20:20:54.939454   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:54.950121   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:54.950174   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:54.975667   39074 cri.go:89] found id: ""
	I1002 20:20:54.975683   39074 logs.go:282] 0 containers: []
	W1002 20:20:54.975692   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:54.975697   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:54.975739   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:55.000676   39074 cri.go:89] found id: ""
	I1002 20:20:55.000692   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.000702   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:55.000711   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:55.000772   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:55.025484   39074 cri.go:89] found id: ""
	I1002 20:20:55.025499   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.025509   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:55.025516   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:55.025570   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:55.050548   39074 cri.go:89] found id: ""
	I1002 20:20:55.050562   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.050570   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:55.050576   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:55.050623   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:55.075593   39074 cri.go:89] found id: ""
	I1002 20:20:55.075608   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.075613   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:55.075618   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:55.075683   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:55.100182   39074 cri.go:89] found id: ""
	I1002 20:20:55.100196   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.100202   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:55.100206   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:55.100245   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:55.125869   39074 cri.go:89] found id: ""
	I1002 20:20:55.125883   39074 logs.go:282] 0 containers: []
	W1002 20:20:55.125890   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:55.125898   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:55.125907   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:55.194871   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:55.194894   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:55.206048   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:55.206063   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:55.259703   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:55.253143    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.253642    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.255145    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.255538    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:55.257050    9882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:55.259714   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:55.259723   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:55.319375   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:55.319393   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:20:57.847993   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:20:57.858498   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:20:57.858550   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:20:57.881390   39074 cri.go:89] found id: ""
	I1002 20:20:57.881404   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.881412   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:20:57.881416   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:20:57.881460   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:20:57.905251   39074 cri.go:89] found id: ""
	I1002 20:20:57.905267   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.905274   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:20:57.905279   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:20:57.905318   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:20:57.931213   39074 cri.go:89] found id: ""
	I1002 20:20:57.931226   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.931233   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:20:57.931238   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:20:57.931280   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:20:57.954527   39074 cri.go:89] found id: ""
	I1002 20:20:57.954544   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.954558   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:20:57.954564   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:20:57.954604   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:20:57.978788   39074 cri.go:89] found id: ""
	I1002 20:20:57.978801   39074 logs.go:282] 0 containers: []
	W1002 20:20:57.978807   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:20:57.978811   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:20:57.978861   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:20:58.004052   39074 cri.go:89] found id: ""
	I1002 20:20:58.004067   39074 logs.go:282] 0 containers: []
	W1002 20:20:58.004075   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:20:58.004082   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:20:58.004123   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:20:58.028322   39074 cri.go:89] found id: ""
	I1002 20:20:58.028335   39074 logs.go:282] 0 containers: []
	W1002 20:20:58.028341   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:20:58.028348   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:20:58.028357   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:20:58.094257   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:20:58.094275   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:20:58.105903   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:20:58.105918   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:20:58.160072   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:20:58.153230   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.153795   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.155325   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.155732   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:20:58.157257   10004 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:20:58.160081   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:20:58.160090   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:20:58.219413   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:20:58.219430   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
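The pgrep timestamps (20:20:46, :49, :52, :54.9, :57.8, ...) show the probe firing roughly every three seconds. A sketch of the implied wait loop, with an assumed interval and deadline (minikube's actual loop lives in its Go code and is not shown in this log):

	# Hypothetical approximation of the wait loop; 3 s interval and 360 s deadline are assumptions.
	deadline=$((SECONDS + 360))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  [ "$SECONDS" -ge "$deadline" ] && { echo "kube-apiserver never appeared" >&2; exit 1; }
	  sleep 3
	done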
	I1002 20:21:00.748760   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:00.759397   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:00.759452   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:00.783722   39074 cri.go:89] found id: ""
	I1002 20:21:00.783738   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.783747   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:00.783755   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:00.783811   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:00.808536   39074 cri.go:89] found id: ""
	I1002 20:21:00.808552   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.808560   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:00.808565   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:00.808619   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:00.833822   39074 cri.go:89] found id: ""
	I1002 20:21:00.833839   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.833846   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:00.833850   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:00.833893   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:00.857297   39074 cri.go:89] found id: ""
	I1002 20:21:00.857311   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.857317   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:00.857322   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:00.857372   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:00.882563   39074 cri.go:89] found id: ""
	I1002 20:21:00.882578   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.882586   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:00.882592   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:00.882664   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:00.907673   39074 cri.go:89] found id: ""
	I1002 20:21:00.907689   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.907698   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:00.907704   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:00.907746   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:00.932133   39074 cri.go:89] found id: ""
	I1002 20:21:00.932148   39074 logs.go:282] 0 containers: []
	W1002 20:21:00.932156   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:00.932165   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:00.932179   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:01.000177   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:01.000198   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:01.012252   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:01.012267   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:01.068351   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:01.061526   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.062112   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.063638   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.064089   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:01.065590   10130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:01.068361   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:01.068370   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:01.128987   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:01.129007   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:03.659911   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:03.670393   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:03.670439   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:03.695784   39074 cri.go:89] found id: ""
	I1002 20:21:03.695796   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.695802   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:03.695806   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:03.695846   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:03.720085   39074 cri.go:89] found id: ""
	I1002 20:21:03.720098   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.720104   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:03.720109   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:03.720150   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:03.745925   39074 cri.go:89] found id: ""
	I1002 20:21:03.745940   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.745950   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:03.745958   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:03.745996   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:03.770616   39074 cri.go:89] found id: ""
	I1002 20:21:03.770632   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.770639   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:03.770655   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:03.770711   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:03.793953   39074 cri.go:89] found id: ""
	I1002 20:21:03.793969   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.793977   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:03.793982   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:03.794028   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:03.818909   39074 cri.go:89] found id: ""
	I1002 20:21:03.818925   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.818933   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:03.818940   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:03.818996   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:03.843200   39074 cri.go:89] found id: ""
	I1002 20:21:03.843213   39074 logs.go:282] 0 containers: []
	W1002 20:21:03.843219   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:03.843228   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:03.843237   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:03.901520   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:03.901537   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:03.929305   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:03.929319   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:03.993117   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:03.993134   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:04.004664   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:04.004678   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:04.058624   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:04.051963   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.052457   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.053947   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.054366   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:04.055857   10263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:06.560322   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:06.570866   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:06.570909   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:06.594524   39074 cri.go:89] found id: ""
	I1002 20:21:06.594536   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.594542   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:06.594547   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:06.594586   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:06.619717   39074 cri.go:89] found id: ""
	I1002 20:21:06.619730   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.619741   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:06.619747   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:06.619787   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:06.643975   39074 cri.go:89] found id: ""
	I1002 20:21:06.643989   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.643994   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:06.643999   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:06.644051   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:06.667642   39074 cri.go:89] found id: ""
	I1002 20:21:06.667674   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.667683   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:06.667690   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:06.667735   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:06.692383   39074 cri.go:89] found id: ""
	I1002 20:21:06.692398   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.692406   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:06.692411   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:06.692459   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:06.716132   39074 cri.go:89] found id: ""
	I1002 20:21:06.716148   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.716157   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:06.716162   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:06.716206   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:06.740781   39074 cri.go:89] found id: ""
	I1002 20:21:06.740794   39074 logs.go:282] 0 containers: []
	W1002 20:21:06.740800   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:06.740809   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:06.740817   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:06.809048   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:06.809064   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:06.820121   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:06.820134   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:06.873477   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:06.866935   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:06.867506   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:06.869037   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:06.869480   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:06.870947   10370 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:06.873489   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:06.873503   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:06.932869   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:06.932885   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:09.461200   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:09.471453   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:09.471494   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:09.495052   39074 cri.go:89] found id: ""
	I1002 20:21:09.495076   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.495083   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:09.495090   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:09.495142   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:09.520680   39074 cri.go:89] found id: ""
	I1002 20:21:09.520694   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.520699   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:09.520704   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:09.520745   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:09.544279   39074 cri.go:89] found id: ""
	I1002 20:21:09.544292   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.544300   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:09.544305   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:09.544343   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:09.568552   39074 cri.go:89] found id: ""
	I1002 20:21:09.568564   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.568570   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:09.568575   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:09.568636   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:09.593483   39074 cri.go:89] found id: ""
	I1002 20:21:09.593496   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.593504   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:09.593509   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:09.593548   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:09.618504   39074 cri.go:89] found id: ""
	I1002 20:21:09.618518   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.618524   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:09.618529   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:09.618568   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:09.644028   39074 cri.go:89] found id: ""
	I1002 20:21:09.644040   39074 logs.go:282] 0 containers: []
	W1002 20:21:09.644046   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:09.644054   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:09.644068   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:09.709968   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:09.709989   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:09.721282   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:09.721295   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:09.774963   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:09.768383   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:09.768943   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:09.770534   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:09.770976   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:09.772525   10496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:09.774974   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:09.774985   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:09.833762   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:09.833780   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:12.362468   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:12.372596   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:12.372637   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:12.398178   39074 cri.go:89] found id: ""
	I1002 20:21:12.398193   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.398202   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:12.398208   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:12.398255   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:12.422734   39074 cri.go:89] found id: ""
	I1002 20:21:12.422751   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.422759   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:12.422764   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:12.422806   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:12.446773   39074 cri.go:89] found id: ""
	I1002 20:21:12.446791   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.446799   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:12.446806   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:12.446847   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:12.470795   39074 cri.go:89] found id: ""
	I1002 20:21:12.470808   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.470815   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:12.470819   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:12.470858   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:12.494783   39074 cri.go:89] found id: ""
	I1002 20:21:12.494796   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.494801   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:12.494805   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:12.494845   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:12.518163   39074 cri.go:89] found id: ""
	I1002 20:21:12.518177   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.518182   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:12.518187   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:12.518226   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:12.542626   39074 cri.go:89] found id: ""
	I1002 20:21:12.542638   39074 logs.go:282] 0 containers: []
	W1002 20:21:12.542643   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:12.542663   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:12.542679   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:12.553111   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:12.553122   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:12.607093   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:12.600525   10620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:12.601040   10620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:12.602535   10620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:12.602952   10620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:12.604425   10620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:21:12.607103   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:12.607112   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:12.666819   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:12.666837   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:21:12.694057   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:12.694071   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	... the same diagnostic pass repeats every ~3 seconds (20:21:15, 20:21:18, 20:21:21, 20:21:24, 20:21:27, 20:21:30, 20:21:33) with identical results: crictl finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers, and "kubectl describe nodes" keeps failing with "connect: connection refused" against localhost:8441 while the kubelet, dmesg, CRI-O, and container-status logs are re-gathered ...
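The probes in this loop can be replayed by hand. Below is a minimal sketch, assuming a shell on the affected node (for example via `minikube ssh`); every command is taken verbatim from the log entries above, and the loop variable `c` is only a convenience introduced here:

	# Replaying minikube's diagnostic pass by hand (sketch; run on the node).
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # is any apiserver process alive?
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$c"             # empty output = no such container
	done
	sudo journalctl -u kubelet -n 400                   # kubelet tail
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400                      # CRI-O tail
	sudo crictl ps -a                                   # overall container status

Empty output from every crictl probe, combined with kubectl's connection-refused error on localhost:8441, is exactly the failure signature this log records: the control plane never came up.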
	I1002 20:21:35.603185   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:35.613834   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:35.613876   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:35.638330   39074 cri.go:89] found id: ""
	I1002 20:21:35.638342   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.638348   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:35.638353   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:35.638391   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:35.661464   39074 cri.go:89] found id: ""
	I1002 20:21:35.661476   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.661482   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:35.661487   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:35.661529   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:35.684962   39074 cri.go:89] found id: ""
	I1002 20:21:35.684977   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.684983   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:35.684987   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:35.685036   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:35.708990   39074 cri.go:89] found id: ""
	I1002 20:21:35.709002   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.709007   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:35.709012   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:35.709054   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:35.732099   39074 cri.go:89] found id: ""
	I1002 20:21:35.732116   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.732125   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:35.732134   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:35.732179   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:35.756437   39074 cri.go:89] found id: ""
	I1002 20:21:35.756450   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.756456   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:35.756461   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:35.756501   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:35.782205   39074 cri.go:89] found id: ""
	I1002 20:21:35.782219   39074 logs.go:282] 0 containers: []
	W1002 20:21:35.782225   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:35.782231   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:35.782240   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:35.849923   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:35.849941   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:21:35.861090   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:35.861104   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:35.914924   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:35.908496   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.908987   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.910547   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.911018   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.912498   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:35.908496   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.908987   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.910547   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.911018   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:35.912498   11590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:21:35.914934   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:35.914943   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:35.975011   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:35.975031   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
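The lines above are one complete iteration of the wait loop this test is stuck in: probe for a kube-apiserver process with pgrep, ask the CRI runtime for each control-plane container with crictl ps -a --quiet --name=<component>, and, when every probe comes back empty, gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. A compressed sketch of that probe loop in Go; the command lines are copied from the log, while the loop shape, timeout, and sleep interval are illustrative, not minikube's actual implementation:

// Illustrative sketch: re-run the probes from the log until a deadline expires.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`:
// it returns the IDs of matching containers, or nothing if none exist.
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		// First probe in the log: a host-level apiserver process.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		for _, c := range components {
			if ids := containerIDs(c); len(ids) > 0 {
				fmt.Printf("%s container(s): %v\n", c, ids)
			}
		}
		time.Sleep(3 * time.Second) // the timestamps above show ~3s between attempts
	}
	fmt.Println("timed out: no control-plane containers ever appeared")
}

Here every probe stays empty on every pass, so the loop only ever reaches the log-gathering branch.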
	[… the same probe-and-gather cycle repeats roughly every three seconds between 20:21:38 and 20:21:56 with identical results: no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers are ever found, and every "describe nodes" fails with the same connection-refused errors against localhost:8441 …]
	I1002 20:21:58.841758   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:21:58.852748   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:21:58.852795   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:21:58.878085   39074 cri.go:89] found id: ""
	I1002 20:21:58.878101   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.878109   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:21:58.878115   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:21:58.878169   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:21:58.903034   39074 cri.go:89] found id: ""
	I1002 20:21:58.903047   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.903054   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:21:58.903058   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:21:58.903097   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:21:58.928063   39074 cri.go:89] found id: ""
	I1002 20:21:58.928079   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.928085   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:21:58.928090   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:21:58.928132   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:21:58.953963   39074 cri.go:89] found id: ""
	I1002 20:21:58.953976   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.953982   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:21:58.953987   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:21:58.954039   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:21:58.980346   39074 cri.go:89] found id: ""
	I1002 20:21:58.980363   39074 logs.go:282] 0 containers: []
	W1002 20:21:58.980372   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:21:58.980379   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:21:58.980430   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:21:59.006332   39074 cri.go:89] found id: ""
	I1002 20:21:59.006348   39074 logs.go:282] 0 containers: []
	W1002 20:21:59.006357   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:21:59.006364   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:21:59.006422   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:21:59.030980   39074 cri.go:89] found id: ""
	I1002 20:21:59.030995   39074 logs.go:282] 0 containers: []
	W1002 20:21:59.031004   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:21:59.031013   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:21:59.031026   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:21:59.086481   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:21:59.079666   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.080350   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.081980   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.082417   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.083935   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:21:59.079666   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.080350   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.081980   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.082417   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:21:59.083935   12588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
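
Every "describe nodes" attempt fails the same way: kubectl itself runs fine, but nothing is listening on the apiserver port this profile uses (8441), so every TCP connect to localhost:8441 is refused. Assuming the ss and curl binaries are available on the node, a quick manual confirmation would look like this (a diagnostic sketch, not part of the test):

    # Confirm nothing is bound to the apiserver port; root is needed to see owners.
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # Unauthenticated health probe; -k skips TLS verification. While the
    # apiserver is down this fails with the same "connection refused".
    curl -sk https://localhost:8441/healthz || true
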
	I1002 20:21:59.086489   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:21:59.086498   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:21:59.150520   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:21:59.150539   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
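
The container-status command uses a small portability fallback: "which crictl" resolves the absolute path (sudo does not always inherit the caller's PATH), "|| echo crictl" keeps the bare name as a last resort, and "|| sudo docker ps -a" covers hosts where crictl fails entirely. Spelled out:

    CRICTL=$(which crictl || echo crictl)       # absolute path if resolvable, bare name otherwise
    sudo "$CRICTL" ps -a || sudo docker ps -a   # fall back to the docker CLI if crictl fails
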
	I1002 20:21:59.178745   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:21:59.178759   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:21:59.248128   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:21:59.248146   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
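
Each failed poll ends with the same sweep: the last 400 journald lines for the crio and kubelet units, a container inventory, and kernel messages at warning level or worse. For manual triage the sweep reduces to three commands, taken verbatim from the log (in dmesg, -P disables the pager, -H selects human-readable output, and -L=never turns colors off so the output is safe to capture):

    sudo journalctl -u crio -n 400      # container runtime: why containers are not created
    sudo journalctl -u kubelet -n 400   # node agent: why static pods never start
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
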
	I1002 20:22:01.761244   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
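
The pgrep probe is the fast path of each cycle: -f matches the pattern against the full command line, -x requires the pattern to match that line in its entirety, and -n reports only the newest matching PID. With no apiserver process it exits non-zero and minikube falls back to the crictl probes that follow:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver process not running"
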
	I1002 20:22:01.771733   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:01.771783   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:01.796879   39074 cri.go:89] found id: ""
	I1002 20:22:01.796894   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.796903   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:01.796908   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:01.796951   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:01.822376   39074 cri.go:89] found id: ""
	I1002 20:22:01.822389   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.822395   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:01.822400   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:01.822445   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:01.847608   39074 cri.go:89] found id: ""
	I1002 20:22:01.847622   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.847628   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:01.847633   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:01.847701   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:01.872893   39074 cri.go:89] found id: ""
	I1002 20:22:01.872913   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.872919   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:01.872924   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:01.872995   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:01.899179   39074 cri.go:89] found id: ""
	I1002 20:22:01.899197   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.899205   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:01.899210   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:01.899258   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:01.925133   39074 cri.go:89] found id: ""
	I1002 20:22:01.925149   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.925158   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:01.925165   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:01.925209   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:01.951281   39074 cri.go:89] found id: ""
	I1002 20:22:01.951294   39074 logs.go:282] 0 containers: []
	W1002 20:22:01.951300   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:01.951307   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:01.951316   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:02.008670   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:02.001480   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.002372   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.004005   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.004404   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.005795   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:02.001480   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.002372   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.004005   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.004404   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:02.005795   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:02.008684   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:02.008697   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:02.072947   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:02.072969   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:02.102011   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:02.102027   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:02.168431   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:02.168449   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:04.680455   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:04.690926   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:04.690981   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:04.715368   39074 cri.go:89] found id: ""
	I1002 20:22:04.715384   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.715390   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:04.715394   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:04.715438   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:04.739937   39074 cri.go:89] found id: ""
	I1002 20:22:04.739951   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.739956   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:04.739960   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:04.739998   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:04.763534   39074 cri.go:89] found id: ""
	I1002 20:22:04.763546   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.763552   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:04.763556   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:04.763615   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:04.788497   39074 cri.go:89] found id: ""
	I1002 20:22:04.788512   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.788519   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:04.788523   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:04.788571   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:04.813000   39074 cri.go:89] found id: ""
	I1002 20:22:04.813012   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.813018   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:04.813022   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:04.813061   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:04.837324   39074 cri.go:89] found id: ""
	I1002 20:22:04.837336   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.837342   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:04.837347   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:04.837387   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:04.863392   39074 cri.go:89] found id: ""
	I1002 20:22:04.863404   39074 logs.go:282] 0 containers: []
	W1002 20:22:04.863410   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:04.863416   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:04.863425   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:04.917001   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:04.910495   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.911044   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.912561   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.912981   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.914415   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:04.910495   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.911044   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.912561   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.912981   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:04.914415   12842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:04.917008   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:04.917017   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:04.980350   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:04.980366   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:05.007566   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:05.007580   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:05.076403   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:05.076419   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:07.589145   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:07.599347   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:07.599390   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:07.623799   39074 cri.go:89] found id: ""
	I1002 20:22:07.623812   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.623818   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:07.623823   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:07.623862   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:07.648210   39074 cri.go:89] found id: ""
	I1002 20:22:07.648222   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.648229   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:07.648233   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:07.648279   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:07.672861   39074 cri.go:89] found id: ""
	I1002 20:22:07.672874   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.672880   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:07.672885   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:07.672933   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:07.696504   39074 cri.go:89] found id: ""
	I1002 20:22:07.696521   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.696530   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:07.696535   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:07.696577   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:07.722324   39074 cri.go:89] found id: ""
	I1002 20:22:07.722340   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.722346   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:07.722351   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:07.722391   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:07.748388   39074 cri.go:89] found id: ""
	I1002 20:22:07.748402   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.748408   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:07.748412   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:07.748449   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:07.773539   39074 cri.go:89] found id: ""
	I1002 20:22:07.773557   39074 logs.go:282] 0 containers: []
	W1002 20:22:07.773564   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:07.773570   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:07.773579   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:07.843853   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:07.843875   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:07.855493   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:07.855511   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:07.909935   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:07.903152   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.903746   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.905310   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.905743   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.907276   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:07.903152   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.903746   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.905310   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.905743   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:07.907276   12969 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:07.909945   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:07.909955   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:07.971055   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:07.971072   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:10.498842   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:10.509052   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:10.509100   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:10.532641   39074 cri.go:89] found id: ""
	I1002 20:22:10.532673   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.532683   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:10.532689   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:10.532737   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:10.555850   39074 cri.go:89] found id: ""
	I1002 20:22:10.555865   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.555872   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:10.555877   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:10.555943   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:10.579608   39074 cri.go:89] found id: ""
	I1002 20:22:10.579623   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.579631   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:10.579636   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:10.579701   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:10.603930   39074 cri.go:89] found id: ""
	I1002 20:22:10.603945   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.603954   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:10.603960   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:10.604006   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:10.627050   39074 cri.go:89] found id: ""
	I1002 20:22:10.627063   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.627070   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:10.627074   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:10.627115   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:10.650231   39074 cri.go:89] found id: ""
	I1002 20:22:10.650246   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.650254   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:10.650261   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:10.650309   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:10.674381   39074 cri.go:89] found id: ""
	I1002 20:22:10.674396   39074 logs.go:282] 0 containers: []
	W1002 20:22:10.674404   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:10.674413   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:10.674422   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:10.743365   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:10.743388   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:10.754432   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:10.754446   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:10.809037   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:10.802468   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.802995   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.804524   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.804992   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.806544   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:10.802468   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.802995   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.804524   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.804992   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:10.806544   13091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:10.809051   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:10.809061   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:10.866627   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:10.866642   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:13.395270   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:13.405561   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:13.405603   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:13.429063   39074 cri.go:89] found id: ""
	I1002 20:22:13.429076   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.429081   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:13.429086   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:13.429125   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:13.452589   39074 cri.go:89] found id: ""
	I1002 20:22:13.452604   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.452609   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:13.452613   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:13.452669   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:13.476844   39074 cri.go:89] found id: ""
	I1002 20:22:13.476856   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.476862   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:13.476866   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:13.476905   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:13.501936   39074 cri.go:89] found id: ""
	I1002 20:22:13.501948   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.501955   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:13.501960   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:13.502000   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:13.526895   39074 cri.go:89] found id: ""
	I1002 20:22:13.526907   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.526913   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:13.526917   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:13.526968   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:13.550888   39074 cri.go:89] found id: ""
	I1002 20:22:13.550902   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.550910   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:13.550914   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:13.550960   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:13.573769   39074 cri.go:89] found id: ""
	I1002 20:22:13.573784   39074 logs.go:282] 0 containers: []
	W1002 20:22:13.573790   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:13.573796   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:13.573807   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:13.626468   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:13.620002   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.620523   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.622171   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.622562   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.623979   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:13.620002   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.620523   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.622171   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.622562   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:13.623979   13207 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:13.626477   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:13.626485   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:13.685732   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:13.685747   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:13.713954   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:13.713970   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:13.785525   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:13.785541   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:16.298756   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:16.309103   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:16.309143   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:16.335506   39074 cri.go:89] found id: ""
	I1002 20:22:16.335521   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.335529   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:16.335535   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:16.335586   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:16.359417   39074 cri.go:89] found id: ""
	I1002 20:22:16.359431   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.359437   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:16.359442   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:16.359482   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:16.383496   39074 cri.go:89] found id: ""
	I1002 20:22:16.383509   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.383517   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:16.383523   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:16.383578   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:16.409227   39074 cri.go:89] found id: ""
	I1002 20:22:16.409243   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.409250   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:16.409254   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:16.409294   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:16.433847   39074 cri.go:89] found id: ""
	I1002 20:22:16.433861   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.433870   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:16.433876   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:16.433933   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:16.457278   39074 cri.go:89] found id: ""
	I1002 20:22:16.457293   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.457299   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:16.457306   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:16.457345   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:16.482697   39074 cri.go:89] found id: ""
	I1002 20:22:16.482709   39074 logs.go:282] 0 containers: []
	W1002 20:22:16.482715   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:16.482721   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:16.482730   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:16.548732   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:16.548752   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:16.559732   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:16.559747   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:16.612487   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:16.606170   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.606702   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.608183   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.608579   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.610023   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:16.606170   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.606702   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.608183   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.608579   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:16.610023   13340 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:16.612499   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:16.612511   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:16.671684   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:16.671702   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:19.200094   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:19.210479   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:19.210527   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:19.235486   39074 cri.go:89] found id: ""
	I1002 20:22:19.235501   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.235510   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:19.235515   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:19.235560   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:19.259294   39074 cri.go:89] found id: ""
	I1002 20:22:19.259305   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.259312   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:19.259316   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:19.259353   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:19.283859   39074 cri.go:89] found id: ""
	I1002 20:22:19.283875   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.283884   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:19.283889   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:19.283941   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:19.307454   39074 cri.go:89] found id: ""
	I1002 20:22:19.307468   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.307473   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:19.307477   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:19.307519   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:19.332321   39074 cri.go:89] found id: ""
	I1002 20:22:19.332334   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.332340   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:19.332345   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:19.332384   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:19.356798   39074 cri.go:89] found id: ""
	I1002 20:22:19.356818   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.356826   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:19.356832   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:19.356886   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:19.382609   39074 cri.go:89] found id: ""
	I1002 20:22:19.382624   39074 logs.go:282] 0 containers: []
	W1002 20:22:19.382632   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:19.382641   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:19.382662   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:19.409876   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:19.409890   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:19.476525   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:19.476540   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:19.487600   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:19.487616   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:19.540532   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:19.533762   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.534339   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.535957   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.536415   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.537924   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:22:19.533762   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.534339   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.535957   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.536415   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:19.537924   13472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:22:19.540541   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:19.540552   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
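
The timestamps show this probe-and-sweep cycle repeating on a roughly three-second cadence (20:22:01, :04, :07, ...) with no change in outcome: no control-plane container ever appears, so the apiserver on 8441 never comes up. The overall shape is an ordinary bounded wait loop; the sketch below illustrates that pattern only, and the 3 s interval and 5-minute budget are assumed values, not minikube's actual tuning:

    deadline=$((SECONDS + 300))   # assumed overall budget, for illustration
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for kube-apiserver" >&2; exit 1; }
      sleep 3
    done
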
	I1002 20:22:22.106355   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:22.116499   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:22.116552   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:22.142485   39074 cri.go:89] found id: ""
	I1002 20:22:22.142499   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.142507   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:22.142514   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:22.142561   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:22.168287   39074 cri.go:89] found id: ""
	I1002 20:22:22.168301   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.168308   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:22.168312   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:22.168352   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:22.192639   39074 cri.go:89] found id: ""
	I1002 20:22:22.192666   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.192674   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:22.192680   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:22.192726   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:22.217360   39074 cri.go:89] found id: ""
	I1002 20:22:22.217375   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.217383   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:22.217390   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:22.217436   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:22.241729   39074 cri.go:89] found id: ""
	I1002 20:22:22.241744   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.241753   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:22.241759   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:22.241809   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:22.266793   39074 cri.go:89] found id: ""
	I1002 20:22:22.266810   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.266817   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:22.266822   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:22.266866   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:22.289775   39074 cri.go:89] found id: ""
	I1002 20:22:22.289789   39074 logs.go:282] 0 containers: []
	W1002 20:22:22.289794   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:22.289801   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:22.289809   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:22.344340   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:22.337274   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.337797   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.339350   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.339784   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:22.341397   13571 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:22:22.344350   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:22.344362   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:22.404393   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:22.404410   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:22.432171   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:22.432186   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:22.498216   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:22.498233   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
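	The cycle above repeats every few seconds while minikube waits for the apiserver: probe for a kube-apiserver process, list CRI containers for each control-plane component, and, finding none, gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. A minimal manual reproduction of the same probe, reusing the exact pgrep and crictl invocations from the log (run via minikube ssh against whichever profile is under test), would be:
	
		minikube ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
		minikube ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	
	Both return nothing in this run, which is why the loop keeps falling through to log gathering.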
	I1002 20:22:25.010156   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:25.020516   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:25.020560   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:25.045455   39074 cri.go:89] found id: ""
	I1002 20:22:25.045470   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.045480   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:25.045486   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:25.045529   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:25.070018   39074 cri.go:89] found id: ""
	I1002 20:22:25.070031   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.070037   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:25.070041   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:25.070080   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:25.093191   39074 cri.go:89] found id: ""
	I1002 20:22:25.093204   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.093210   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:25.093214   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:25.093257   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:25.117770   39074 cri.go:89] found id: ""
	I1002 20:22:25.117782   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.117788   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:25.117793   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:25.117834   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:25.141300   39074 cri.go:89] found id: ""
	I1002 20:22:25.141315   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.141325   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:25.141331   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:25.141383   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:25.165980   39074 cri.go:89] found id: ""
	I1002 20:22:25.165993   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.165999   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:25.166003   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:25.166041   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:25.191730   39074 cri.go:89] found id: ""
	I1002 20:22:25.191742   39074 logs.go:282] 0 containers: []
	W1002 20:22:25.191749   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:25.191757   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:25.191766   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:25.259005   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:25.259025   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:25.270639   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:25.270673   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:25.324592   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:25.317379   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.317967   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.319572   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.320064   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:25.321617   13712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:22:25.324602   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:25.324614   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:25.385501   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:25.385519   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:27.914463   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:27.925227   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:27.925271   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:27.948666   39074 cri.go:89] found id: ""
	I1002 20:22:27.948681   39074 logs.go:282] 0 containers: []
	W1002 20:22:27.948690   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:27.948695   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:27.948735   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:27.972698   39074 cri.go:89] found id: ""
	I1002 20:22:27.972711   39074 logs.go:282] 0 containers: []
	W1002 20:22:27.972716   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:27.972720   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:27.972765   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:27.996954   39074 cri.go:89] found id: ""
	I1002 20:22:27.996970   39074 logs.go:282] 0 containers: []
	W1002 20:22:27.996979   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:27.996984   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:27.997029   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:28.022092   39074 cri.go:89] found id: ""
	I1002 20:22:28.022109   39074 logs.go:282] 0 containers: []
	W1002 20:22:28.022117   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:28.022123   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:28.022164   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:28.047808   39074 cri.go:89] found id: ""
	I1002 20:22:28.047824   39074 logs.go:282] 0 containers: []
	W1002 20:22:28.047831   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:28.047836   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:28.047876   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:28.071793   39074 cri.go:89] found id: ""
	I1002 20:22:28.071807   39074 logs.go:282] 0 containers: []
	W1002 20:22:28.071816   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:28.071822   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:28.071868   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:28.096447   39074 cri.go:89] found id: ""
	I1002 20:22:28.096462   39074 logs.go:282] 0 containers: []
	W1002 20:22:28.096471   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:28.096479   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:28.096489   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:28.107018   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:28.107032   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:28.159925   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:28.153221   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.153766   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.155300   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.155764   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:28.157273   13835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:22:28.159935   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:28.159945   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:28.219759   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:28.219776   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:28.247325   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:28.247345   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:30.813772   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:30.824079   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:30.824122   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:30.847714   39074 cri.go:89] found id: ""
	I1002 20:22:30.847727   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.847734   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:30.847739   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:30.847783   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:30.870579   39074 cri.go:89] found id: ""
	I1002 20:22:30.870612   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.870619   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:30.870623   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:30.870686   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:30.894513   39074 cri.go:89] found id: ""
	I1002 20:22:30.894528   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.894537   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:30.894542   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:30.894591   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:30.919171   39074 cri.go:89] found id: ""
	I1002 20:22:30.919186   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.919191   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:30.919196   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:30.919236   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:30.943990   39074 cri.go:89] found id: ""
	I1002 20:22:30.944003   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.944009   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:30.944013   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:30.944054   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:30.968147   39074 cri.go:89] found id: ""
	I1002 20:22:30.968162   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.968170   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:30.968178   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:30.968227   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:30.991705   39074 cri.go:89] found id: ""
	I1002 20:22:30.991717   39074 logs.go:282] 0 containers: []
	W1002 20:22:30.991722   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:30.991729   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:30.991740   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:31.046303   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:31.038433   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.039034   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.040929   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.041723   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:31.043230   13952 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:22:31.046314   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:31.046325   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:31.105380   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:31.105397   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:31.132347   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:31.132363   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:31.202102   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:31.202119   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:33.715172   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:33.725339   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:33.725386   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:33.750520   39074 cri.go:89] found id: ""
	I1002 20:22:33.750534   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.750543   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:33.750549   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:33.750595   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:33.773913   39074 cri.go:89] found id: ""
	I1002 20:22:33.773928   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.773937   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:33.773943   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:33.773991   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:33.797530   39074 cri.go:89] found id: ""
	I1002 20:22:33.797545   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.797554   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:33.797560   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:33.797630   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:33.821852   39074 cri.go:89] found id: ""
	I1002 20:22:33.821871   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.821879   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:33.821885   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:33.821934   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:33.846332   39074 cri.go:89] found id: ""
	I1002 20:22:33.846348   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.846356   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:33.846362   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:33.846400   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:33.870615   39074 cri.go:89] found id: ""
	I1002 20:22:33.870629   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.870639   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:33.870657   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:33.870706   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:33.895226   39074 cri.go:89] found id: ""
	I1002 20:22:33.895241   39074 logs.go:282] 0 containers: []
	W1002 20:22:33.895250   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:33.895266   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:33.895276   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:33.955530   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:33.955547   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:33.983183   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:33.983198   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:34.049224   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:34.049251   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:34.060667   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:34.060686   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:34.114666   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:34.107838   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.108343   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.109897   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.110325   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:34.111840   14097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:22:36.616388   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:36.626616   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:22:36.626688   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:22:36.652926   39074 cri.go:89] found id: ""
	I1002 20:22:36.652947   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.652957   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:22:36.652965   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:22:36.653011   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:22:36.676048   39074 cri.go:89] found id: ""
	I1002 20:22:36.676060   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.676066   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:22:36.676071   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:22:36.676115   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:22:36.700475   39074 cri.go:89] found id: ""
	I1002 20:22:36.700489   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.700499   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:22:36.700505   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:22:36.700546   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:22:36.724541   39074 cri.go:89] found id: ""
	I1002 20:22:36.724559   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.724567   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:22:36.724576   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:22:36.724623   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:22:36.748967   39074 cri.go:89] found id: ""
	I1002 20:22:36.748982   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.748991   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:22:36.748997   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:22:36.749043   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:22:36.773168   39074 cri.go:89] found id: ""
	I1002 20:22:36.773183   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.773191   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:22:36.773197   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:22:36.773249   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:22:36.796981   39074 cri.go:89] found id: ""
	I1002 20:22:36.796997   39074 logs.go:282] 0 containers: []
	W1002 20:22:36.797003   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:22:36.797011   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:22:36.797023   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:22:36.867000   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:22:36.867018   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:22:36.878017   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:22:36.878031   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:22:36.931114   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:22:36.924389   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.924871   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.926348   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.926766   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:22:36.928319   14202 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:22:36.931129   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:22:36.931137   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:22:36.993849   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:22:36.993868   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:22:39.524626   39074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:22:39.535502   39074 kubeadm.go:601] duration metric: took 4m1.714069333s to restartPrimaryControlPlane
	W1002 20:22:39.535572   39074 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1002 20:22:39.535638   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:22:39.981011   39074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:22:39.993244   39074 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:22:40.001158   39074 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:22:40.001211   39074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:22:40.008736   39074 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:22:40.008749   39074 kubeadm.go:157] found existing configuration files:
	
	I1002 20:22:40.008782   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:22:40.015964   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:22:40.016000   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:22:40.022839   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:22:40.030026   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:22:40.030064   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:22:40.036752   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:22:40.043720   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:22:40.043755   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:22:40.050532   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:22:40.057416   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:22:40.057453   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
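	The four grep-then-rm sequences above all follow one pattern: check whether the kubeconfig file already points at the expected control-plane endpoint and, when the check fails (here because none of the files exist), delete it so kubeadm init regenerates it. A condensed sketch of that per-file check, assuming the same endpoint and paths as the log:
	
		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f.conf" \
		    || sudo rm -f "/etc/kubernetes/$f.conf"
		done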
	I1002 20:22:40.063936   39074 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:22:40.116427   39074 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:22:40.171173   39074 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:26:42.624936   39074 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:26:42.625021   39074 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:26:42.627908   39074 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:26:42.627954   39074 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:26:42.628043   39074 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:26:42.628106   39074 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:26:42.628137   39074 kubeadm.go:318] OS: Linux
	I1002 20:26:42.628173   39074 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:26:42.628211   39074 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:26:42.628278   39074 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:26:42.628331   39074 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:26:42.628370   39074 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:26:42.628412   39074 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:26:42.628451   39074 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:26:42.628487   39074 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:26:42.628556   39074 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:26:42.628674   39074 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:26:42.628787   39074 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:26:42.628860   39074 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:26:42.630666   39074 out.go:252]   - Generating certificates and keys ...
	I1002 20:26:42.630736   39074 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:26:42.630813   39074 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:26:42.630900   39074 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:26:42.630973   39074 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:26:42.631035   39074 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:26:42.631078   39074 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:26:42.631142   39074 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:26:42.631194   39074 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:26:42.631256   39074 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:26:42.631324   39074 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:26:42.631354   39074 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:26:42.631399   39074 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:26:42.631441   39074 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:26:42.631487   39074 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:26:42.631529   39074 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:26:42.631595   39074 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:26:42.631671   39074 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:26:42.631741   39074 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:26:42.631812   39074 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:26:42.633616   39074 out.go:252]   - Booting up control plane ...
	I1002 20:26:42.633716   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:26:42.633796   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:26:42.633850   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:26:42.633948   39074 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:26:42.634026   39074 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:26:42.634114   39074 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:26:42.634190   39074 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:26:42.634222   39074 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:26:42.634348   39074 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:26:42.634448   39074 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:26:42.634515   39074 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000852315s
	I1002 20:26:42.634627   39074 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:26:42.634725   39074 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 20:26:42.634809   39074 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:26:42.634907   39074 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:26:42.635026   39074 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000487211s
	I1002 20:26:42.635115   39074 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000480662s
	I1002 20:26:42.635180   39074 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000745923s
	I1002 20:26:42.635185   39074 kubeadm.go:318] 
	I1002 20:26:42.635259   39074 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:26:42.635324   39074 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:26:42.635395   39074 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:26:42.635478   39074 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:26:42.635541   39074 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:26:42.635608   39074 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:26:42.635644   39074 kubeadm.go:318] 
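	Each control-plane-check failure above names a concrete health endpoint, so the components can be probed directly from inside the node. A quick check using the same URLs kubeadm reports (-k skips verification of the self-signed serving certificates); in this run all three refuse the connection:
	
		curl -k https://192.168.49.2:8441/livez
		curl -k https://127.0.0.1:10257/healthz
		curl -k https://127.0.0.1:10259/livez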
	W1002 20:26:42.635735   39074 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000852315s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000487211s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000480662s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000745923s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
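	Following the troubleshooting hint kubeadm prints above, the next step is to list the Kubernetes containers via the CRI-O socket and inspect the logs of whichever one exited; both commands are taken verbatim from the advice in the log, with CONTAINERID standing in for the ID the first command reveals:
	
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID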
	
	I1002 20:26:42.635812   39074 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:26:43.072992   39074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:26:43.084946   39074 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:26:43.084987   39074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:26:43.092545   39074 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:26:43.092552   39074 kubeadm.go:157] found existing configuration files:
	
	I1002 20:26:43.092583   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:26:43.099679   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:26:43.099725   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:26:43.106411   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:26:43.113271   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:26:43.113302   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:26:43.120089   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:26:43.126923   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:26:43.126953   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:26:43.133686   39074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:26:43.140427   39074 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:26:43.140454   39074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:26:43.147131   39074 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:26:43.180956   39074 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:26:43.181017   39074 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:26:43.199951   39074 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:26:43.200009   39074 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:26:43.200037   39074 kubeadm.go:318] OS: Linux
	I1002 20:26:43.200076   39074 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:26:43.200114   39074 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:26:43.200153   39074 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:26:43.200196   39074 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:26:43.200234   39074 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:26:43.200272   39074 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:26:43.200315   39074 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:26:43.200350   39074 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:26:43.254197   39074 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:26:43.254330   39074 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:26:43.254435   39074 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:26:43.260331   39074 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:26:43.264543   39074 out.go:252]   - Generating certificates and keys ...
	I1002 20:26:43.264610   39074 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:26:43.264706   39074 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:26:43.264789   39074 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:26:43.264843   39074 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:26:43.264905   39074 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:26:43.264949   39074 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:26:43.265012   39074 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:26:43.265062   39074 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:26:43.265129   39074 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:26:43.265188   39074 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:26:43.265219   39074 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:26:43.265265   39074 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:26:43.505091   39074 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:26:43.932140   39074 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:26:44.064643   39074 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:26:44.173218   39074 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:26:44.534380   39074 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:26:44.534804   39074 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:26:44.538135   39074 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:26:44.539757   39074 out.go:252]   - Booting up control plane ...
	I1002 20:26:44.539881   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:26:44.539950   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:26:44.540002   39074 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:26:44.553179   39074 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:26:44.553329   39074 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:26:44.559491   39074 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:26:44.559770   39074 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:26:44.559808   39074 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:26:44.659881   39074 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:26:44.660026   39074 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:26:45.660495   39074 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000782032s
	I1002 20:26:45.664397   39074 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:26:45.664522   39074 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 20:26:45.664595   39074 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:26:45.664676   39074 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:30:45.665391   39074 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000736768s
	I1002 20:30:45.665506   39074 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000901114s
	I1002 20:30:45.665618   39074 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001071178s
	I1002 20:30:45.665634   39074 kubeadm.go:318] 
	I1002 20:30:45.665788   39074 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:30:45.665904   39074 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:30:45.665995   39074 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:30:45.666081   39074 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:30:45.666142   39074 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:30:45.666213   39074 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:30:45.666216   39074 kubeadm.go:318] 
	I1002 20:30:45.669103   39074 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:30:45.669219   39074 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:30:45.669740   39074 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:30:45.669792   39074 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:30:45.669843   39074 kubeadm.go:402] duration metric: took 12m7.882478982s to StartCluster
	I1002 20:30:45.669874   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:30:45.669917   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:30:45.695577   39074 cri.go:89] found id: ""
	I1002 20:30:45.695596   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.695603   39074 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:30:45.695610   39074 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:30:45.695674   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:30:45.719440   39074 cri.go:89] found id: ""
	I1002 20:30:45.719456   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.719464   39074 logs.go:284] No container was found matching "etcd"
	I1002 20:30:45.719469   39074 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:30:45.719511   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:30:45.743166   39074 cri.go:89] found id: ""
	I1002 20:30:45.743181   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.743190   39074 logs.go:284] No container was found matching "coredns"
	I1002 20:30:45.743195   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:30:45.743238   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:30:45.767934   39074 cri.go:89] found id: ""
	I1002 20:30:45.767959   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.767967   39074 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:30:45.767974   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:30:45.768019   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:30:45.792091   39074 cri.go:89] found id: ""
	I1002 20:30:45.792102   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.792108   39074 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:30:45.792112   39074 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:30:45.792150   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:30:45.815448   39074 cri.go:89] found id: ""
	I1002 20:30:45.815463   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.815469   39074 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:30:45.815475   39074 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:30:45.815518   39074 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:30:45.840287   39074 cri.go:89] found id: ""
	I1002 20:30:45.840299   39074 logs.go:282] 0 containers: []
	W1002 20:30:45.840305   39074 logs.go:284] No container was found matching "kindnet"
	I1002 20:30:45.840312   39074 logs.go:123] Gathering logs for container status ...
	I1002 20:30:45.840321   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:30:45.868158   39074 logs.go:123] Gathering logs for kubelet ...
	I1002 20:30:45.868172   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:30:45.936734   39074 logs.go:123] Gathering logs for dmesg ...
	I1002 20:30:45.936752   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:30:45.948158   39074 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:30:45.948175   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:30:46.002360   39074 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:30:45.995517   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.996138   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.997668   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.998069   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.999573   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:30:45.995517   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.996138   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.997668   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.998069   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:45.999573   15557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:30:46.002381   39074 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:30:46.002392   39074 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1002 20:30:46.065214   39074 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000782032s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000736768s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000901114s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001071178s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:30:46.065257   39074 out.go:285] * 
	W1002 20:30:46.065383   39074 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000782032s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000736768s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000901114s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001071178s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:30:46.065406   39074 out.go:285] * 
	W1002 20:30:46.067075   39074 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:30:46.070473   39074 out.go:203] 
	W1002 20:30:46.071639   39074 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000782032s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000736768s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000901114s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001071178s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:30:46.071666   39074 out.go:285] * 
	I1002 20:30:46.072909   39074 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.551126227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.551499586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.565235643Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c5dff043-11a1-4b1d-b981-42c744daa8e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.566508741Z" level=info msg="createCtr: deleting container ID 21140f6a02459c6df36ecf818ef775e79b3462280bfd0d1f16f28e3316920953 from idIndex" id=c5dff043-11a1-4b1d-b981-42c744daa8e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.566538929Z" level=info msg="createCtr: removing container 21140f6a02459c6df36ecf818ef775e79b3462280bfd0d1f16f28e3316920953" id=c5dff043-11a1-4b1d-b981-42c744daa8e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.566565962Z" level=info msg="createCtr: deleting container 21140f6a02459c6df36ecf818ef775e79b3462280bfd0d1f16f28e3316920953 from storage" id=c5dff043-11a1-4b1d-b981-42c744daa8e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:42 functional-753218 crio[5814]: time="2025-10-02T20:30:42.568315977Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-753218_kube-system_802f0aebed1bb3dd62306b1d2076fd94_0" id=c5dff043-11a1-4b1d-b981-42c744daa8e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.450455094Z" level=info msg="Checking image status: kicbase/echo-server:functional-753218" id=c84879bd-092b-4d01-abe7-b2acfa8ed4e5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.483285859Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-753218" id=aae67a42-d58b-4787-a09a-d4bb84087023 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.483430896Z" level=info msg="Image docker.io/kicbase/echo-server:functional-753218 not found" id=aae67a42-d58b-4787-a09a-d4bb84087023 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.483476576Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-753218 found" id=aae67a42-d58b-4787-a09a-d4bb84087023 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.515698431Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-753218" id=536fee8a-5230-478c-b564-1d673c61249a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.515913544Z" level=info msg="Image localhost/kicbase/echo-server:functional-753218 not found" id=536fee8a-5230-478c-b564-1d673c61249a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.515997087Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-753218 found" id=536fee8a-5230-478c-b564-1d673c61249a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.546515058Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=bdab93a9-7587-4cb4-9227-aecdd5658b08 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.547813566Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=36a34871-a550-4824-a392-ff13aefa7960 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.549345767Z" level=info msg="Creating container: kube-system/etcd-functional-753218/etcd" id=0ab23018-33e2-4a45-ad2d-b4d6c1d468c0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.549700934Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.554361976Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.55496359Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.571900459Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=0ab23018-33e2-4a45-ad2d-b4d6c1d468c0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.574718182Z" level=info msg="createCtr: deleting container ID d850311462a76f85b978b6d1d6b91d0eccb828aba6f7bf7a7d7f28ffee93877b from idIndex" id=0ab23018-33e2-4a45-ad2d-b4d6c1d468c0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.574779124Z" level=info msg="createCtr: removing container d850311462a76f85b978b6d1d6b91d0eccb828aba6f7bf7a7d7f28ffee93877b" id=0ab23018-33e2-4a45-ad2d-b4d6c1d468c0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.574827895Z" level=info msg="createCtr: deleting container d850311462a76f85b978b6d1d6b91d0eccb828aba6f7bf7a7d7f28ffee93877b from storage" id=0ab23018-33e2-4a45-ad2d-b4d6c1d468c0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:52 functional-753218 crio[5814]: time="2025-10-02T20:30:52.577730985Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-753218_kube-system_91f2b96cb3e3f380ded17f30e8d873bd_0" id=0ab23018-33e2-4a45-ad2d-b4d6c1d468c0 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:30:53.236306   16451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:53.236823   16451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:53.238425   16451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:53.239108   16451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:30:53.240238   16451 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:30:53 up  1:13,  0 user,  load average: 0.40, 0.13, 0.09
	Linux functional-753218 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.168537   14925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753218?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: I1002 20:30:42.321168   14925 kubelet_node_status.go:75] "Attempting to register node" node="functional-753218"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.321508   14925 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753218"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.545784   14925 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.568537   14925 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:30:42 functional-753218 kubelet[14925]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:42 functional-753218 kubelet[14925]:  > podSandboxID="7a2fde0baea214f3eb0043d508edd186efa5f3f087d902573e164eb4765f9b5b"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.568614   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:30:42 functional-753218 kubelet[14925]:         container kube-apiserver start failed in pod kube-apiserver-functional-753218_kube-system(802f0aebed1bb3dd62306b1d2076fd94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:42 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:30:42 functional-753218 kubelet[14925]: E1002 20:30:42.568640   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753218" podUID="802f0aebed1bb3dd62306b1d2076fd94"
	Oct 02 20:30:45 functional-753218 kubelet[14925]: E1002 20:30:45.563684   14925 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-753218\" not found"
	Oct 02 20:30:46 functional-753218 kubelet[14925]: E1002 20:30:46.169281   14925 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753218.186ac673e6f5d5d2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753218,UID:functional-753218,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753218 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753218,},FirstTimestamp:2025-10-02 20:26:45.540009426 +0000 UTC m=+0.879461117,LastTimestamp:2025-10-02 20:26:45.540009426 +0000 UTC m=+0.879461117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753218,}"
	Oct 02 20:30:49 functional-753218 kubelet[14925]: E1002 20:30:49.169505   14925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753218?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:30:49 functional-753218 kubelet[14925]: I1002 20:30:49.323768   14925 kubelet_node_status.go:75] "Attempting to register node" node="functional-753218"
	Oct 02 20:30:49 functional-753218 kubelet[14925]: E1002 20:30:49.324202   14925 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753218"
	Oct 02 20:30:52 functional-753218 kubelet[14925]: E1002 20:30:52.545972   14925 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:30:52 functional-753218 kubelet[14925]: E1002 20:30:52.578138   14925 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:30:52 functional-753218 kubelet[14925]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:52 functional-753218 kubelet[14925]:  > podSandboxID="938004d98ea751eb2eeff411184915e21872d6d9720257a5999ef0864a9cbb1c"
	Oct 02 20:30:52 functional-753218 kubelet[14925]: E1002 20:30:52.578282   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:30:52 functional-753218 kubelet[14925]:         container etcd start failed in pod etcd-functional-753218_kube-system(91f2b96cb3e3f380ded17f30e8d873bd): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:52 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:30:52 functional-753218 kubelet[14925]: E1002 20:30:52.578326   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753218" podUID="91f2b96cb3e3f380ded17f30e8d873bd"
	Oct 02 20:30:53 functional-753218 kubelet[14925]: E1002 20:30:53.301045   14925 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-753218&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	

                                                
                                                
-- /stdout --
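The failure signature is consistent through the whole dump: every CreateContainer request CRI-O handles for the control-plane pods (kube-apiserver, etcd) fails with "cannot open sd-bus: No such file or directory", the failed container is rolled back, and the container status table stays empty, which is why kubeadm's checks against 192.168.49.2:8441, 127.0.0.1:10257 and 127.0.0.1:10259 only ever see connection refused. A minimal triage sketch from inside the node, assuming stock crictl/journalctl tooling and the default /etc/crio/ config location (illustrative commands, not taken from this run):

    minikube ssh -p functional-753218
    sudo crictl ps -a                               # empty here: creation fails before any container registers
    sudo journalctl -u crio -n 100 | grep sd-bus    # the CreateContainer errors quoted above
    grep -rn cgroup_manager /etc/crio/              # cgroup_manager = "systemd" needs a reachable
                                                    # systemd D-Bus socket; "cgroupfs" avoids it

One plausible cause, to be verified rather than concluded from this log alone, is the runtime being configured for the systemd cgroup manager in an environment where the systemd bus socket is not available.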
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218: exit status 2 (394.0657ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753218" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/MySQL (2.33s)
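The 2.33s duration is itself diagnostic: the test fails during client setup, not in MySQL logic, because the apiserver on functional-753218 is down. Before reading any individual parallel-test failure here, a status check of the form below (a sketch; .Host, .Kubelet and .APIServer are the standard minikube status template fields) distinguishes a broken test from a cluster that never started:

    out/minikube-linux-amd64 status -p functional-753218 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'

which for this run reports the API server as Stopped, matching the helper output above.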

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-753218 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-753218 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (51.028779ms)

                                                
                                                
** stderr ** 
	E1002 20:31:02.120981   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.121237   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.122489   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.122765   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.124217   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
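The go-template above iterates the labels map of the first node and prints only the keys. On a run where startup succeeded, the same command would emit the minikube.k8s.io/* keys checked below; here it never reaches template evaluation because API discovery itself is refused. Expected shape on a healthy cluster (illustrative, not output from this run):

    kubectl --context functional-753218 get nodes --output=go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
    # ... kubernetes.io/hostname minikube.k8s.io/commit minikube.k8s.io/name
    # minikube.k8s.io/primary minikube.k8s.io/updated_at minikube.k8s.io/version ...

Every assertion that follows therefore reports an empty label set with the same connection-refused stderr.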
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-753218 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	E1002 20:31:02.120981   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.121237   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.122489   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.122765   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.124217   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	E1002 20:31:02.120981   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.121237   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.122489   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.122765   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.124217   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1002 20:31:02.120981   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.121237   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.122489   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.122765   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.124217   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1002 20:31:02.120981   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.121237   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.122489   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.122765   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.124217   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1002 20:31:02.120981   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.121237   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.122489   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.122765   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:31:02.124217   60829 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
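For reference, the label check is nothing more than a kubectl go-template over the first node's labels; a minimal way to re-run it by hand (context name taken from this run) is:

	kubectl --context functional-753218 get nodes --output=go-template \
	  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
	# Prints the label keys of the first node; the test expects the five
	# minikube.k8s.io/* keys among them. Against this cluster it can only
	# fail: the apiserver at 192.168.49.2:8441 is refusing connections.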
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753218
helpers_test.go:243: (dbg) docker inspect functional-753218:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	        "Created": "2025-10-02T20:04:01.746609873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 27425,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:04:01.779241985Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hostname",
	        "HostsPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/hosts",
	        "LogPath": "/var/lib/docker/containers/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2/13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2-json.log",
	        "Name": "/functional-753218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "13014a14c3fc7770cadfda5fd7379059ed74bf930a805fc20a502770498ce9e2",
	                "LowerDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9195ca595c71a1f034347fdc960fb35dc112356a66eab0ddb31818b3427da08a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753218",
	                "Source": "/var/lib/docker/volumes/functional-753218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753218",
	                "name.minikube.sigs.k8s.io": "functional-753218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "004081b2f3377fd186346a9b01eb1a4bc9a3def8255492ba6d60a3d97d3ae02f",
	            "SandboxKey": "/var/run/docker/netns/004081b2f337",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:da:57:41:87:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5a9808f557430f8bb8f634f8b99193d9e4883c7bab84d388ec04ce3ac0291ef6",
	                    "EndpointID": "2b07b33f476d99b70eed072dbaf735ba0ad3a85f48f17fd63921a2bfbfd2db45",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753218",
	                        "13014a14c3fc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
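The forwarded apiserver port is already present in the inspect output above; a quick way to pull just that field, using the same --format idiom the harness itself uses later in this log, is:

	docker inspect functional-753218 \
	  --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'
	# Prints 32781 for this run, per the NetworkSettings.Ports block above.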
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753218 -n functional-753218: exit status 2 (294.5183ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
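The exit code is non-zero even though the host reports Running because another component is down; querying the apiserver field directly (exactly what the harness does at the end of this post-mortem) makes that explicit:

	out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218
	# Prints "Stopped" for this run (see below), which accounts for exit status 2.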
helpers_test.go:252: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 logs -n 25
helpers_test.go:260: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-753218 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                  │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ image   │ functional-753218 image save --daemon kicbase/echo-server:functional-753218 --alsologtostderr                                     │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh sudo umount -f /mount-9p                                                                                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ mount   │ -p functional-753218 /tmp/TestFunctionalparallelMountCmdspecific-port2212612994/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh     │ functional-753218 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh -- ls -la /mount-9p                                                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh sudo umount -f /mount-9p                                                                                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service │ functional-753218 service list                                                                                                    │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ mount   │ -p functional-753218 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000626359/001:/mount2 --alsologtostderr -v=1                │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ mount   │ -p functional-753218 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000626359/001:/mount1 --alsologtostderr -v=1                │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ mount   │ -p functional-753218 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000626359/001:/mount3 --alsologtostderr -v=1                │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh     │ functional-753218 ssh findmnt -T /mount1                                                                                          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service │ functional-753218 service list -o json                                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service │ functional-753218 service --namespace=default --https --url hello-node                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service │ functional-753218 service hello-node --url --format={{.IP}}                                                                       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh     │ functional-753218 ssh findmnt -T /mount1                                                                                          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ service │ functional-753218 service hello-node --url                                                                                        │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh     │ functional-753218 ssh findmnt -T /mount2                                                                                          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh     │ functional-753218 ssh findmnt -T /mount3                                                                                          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ mount   │ -p functional-753218 --kill=true                                                                                                  │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ start   │ -p functional-753218 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ start   │ -p functional-753218 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ start   │ -p functional-753218 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                   │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:31:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:31:01.900418   60624 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:31:01.900625   60624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:31:01.900633   60624 out.go:374] Setting ErrFile to fd 2...
	I1002 20:31:01.900637   60624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:31:01.900837   60624 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:31:01.901233   60624 out.go:368] Setting JSON to false
	I1002 20:31:01.902055   60624 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4411,"bootTime":1759432651,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:31:01.902136   60624 start.go:140] virtualization: kvm guest
	I1002 20:31:01.904282   60624 out.go:179] * [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:31:01.905775   60624 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:31:01.905831   60624 notify.go:221] Checking for updates...
	I1002 20:31:01.908487   60624 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:31:01.909539   60624 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:31:01.910782   60624 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:31:01.912067   60624 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:31:01.913370   60624 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:31:01.915249   60624 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:31:01.915917   60624 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:31:01.940532   60624 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:31:01.940722   60624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:31:01.999857   60624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:31:01.988739527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:31:01.999965   60624 docker.go:319] overlay module found
	I1002 20:31:02.003791   60624 out.go:179] * Using the docker driver based on existing profile
	I1002 20:31:02.005402   60624 start.go:306] selected driver: docker
	I1002 20:31:02.005424   60624 start.go:936] validating driver "docker" against &{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:31:02.005528   60624 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:31:02.005622   60624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:31:02.065972   60624 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:31:02.054061844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:31:02.066877   60624 cni.go:84] Creating CNI manager for ""
	I1002 20:31:02.066944   60624 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:31:02.066994   60624 start.go:350] cluster config:
	{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:31:02.069107   60624 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.697507819Z" level=info msg="Checking image status: kicbase/echo-server:functional-753218" id=fa393231-a5fd-49e9-8950-3e6bf6e4053d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.720007372Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-753218" id=7436e45b-1a60-4fc4-a22f-69e3e32eab53 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.720140274Z" level=info msg="Image docker.io/kicbase/echo-server:functional-753218 not found" id=7436e45b-1a60-4fc4-a22f-69e3e32eab53 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.720190361Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-753218 found" id=7436e45b-1a60-4fc4-a22f-69e3e32eab53 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.742733677Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-753218" id=be98d603-767a-459b-b0a0-9fdab10c2304 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.742868717Z" level=info msg="Image localhost/kicbase/echo-server:functional-753218 not found" id=be98d603-767a-459b-b0a0-9fdab10c2304 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:54 functional-753218 crio[5814]: time="2025-10-02T20:30:54.742909978Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-753218 found" id=be98d603-767a-459b-b0a0-9fdab10c2304 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.459772794Z" level=info msg="Checking image status: kicbase/echo-server:functional-753218" id=c8f7a097-87b5-4be9-96a8-83c5b0aea5dd name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.483212464Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-753218" id=344a7416-e7c8-49d6-a170-a397aaa725f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.483336385Z" level=info msg="Image docker.io/kicbase/echo-server:functional-753218 not found" id=344a7416-e7c8-49d6-a170-a397aaa725f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.483365009Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-753218 found" id=344a7416-e7c8-49d6-a170-a397aaa725f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.508218789Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-753218" id=bf0e1532-86b0-435c-95d0-10b8695ecf60 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.508368222Z" level=info msg="Image localhost/kicbase/echo-server:functional-753218 not found" id=bf0e1532-86b0-435c-95d0-10b8695ecf60 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.508409995Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-753218 found" id=bf0e1532-86b0-435c-95d0-10b8695ecf60 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.546136327Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b91303cc-8916-495e-ab50-b39ca6a3e470 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.547120349Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=f14b81fb-d2e6-4ab2-80c7-0d6ecf807ca9 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.548289765Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-753218/kube-apiserver" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.548564978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.553541497Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.554186326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.568588089Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.570341207Z" level=info msg="createCtr: deleting container ID 19fc64788ec4955e7094db6406e309c1f152c13fb679a12bafd3867e5a5ba39c from idIndex" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.570379579Z" level=info msg="createCtr: removing container 19fc64788ec4955e7094db6406e309c1f152c13fb679a12bafd3867e5a5ba39c" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.570421105Z" level=info msg="createCtr: deleting container 19fc64788ec4955e7094db6406e309c1f152c13fb679a12bafd3867e5a5ba39c from storage" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:30:55 functional-753218 crio[5814]: time="2025-10-02T20:30:55.573125941Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-753218_kube-system_802f0aebed1bb3dd62306b1d2076fd94_0" id=57fcf50a-eb12-417e-abe0-667f12fe799b name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:31:02.983400   17728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:31:02.984010   17728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:31:02.985174   17728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:31:02.985667   17728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:31:02.987230   17728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:31:03 up  1:13,  0 user,  load average: 0.88, 0.24, 0.13
	Linux functional-753218 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:30:53 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:30:53 functional-753218 kubelet[14925]: E1002 20:30:53.583334   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-753218" podUID="b932b0024653c86a7ea85a2a83a943a4"
	Oct 02 20:30:54 functional-753218 kubelet[14925]: E1002 20:30:54.545043   14925 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:30:54 functional-753218 kubelet[14925]: E1002 20:30:54.566502   14925 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:30:54 functional-753218 kubelet[14925]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:54 functional-753218 kubelet[14925]:  > podSandboxID="6ae6de7d398fa442f7f140a6767c4de14fdad57319542a7b5e3df53c8ac49d18"
	Oct 02 20:30:54 functional-753218 kubelet[14925]: E1002 20:30:54.566605   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:30:54 functional-753218 kubelet[14925]:         container kube-scheduler start failed in pod kube-scheduler-functional-753218_kube-system(b25a71e49a335bbe853872de1b1e3093): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:54 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:30:54 functional-753218 kubelet[14925]: E1002 20:30:54.566641   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753218" podUID="b25a71e49a335bbe853872de1b1e3093"
	Oct 02 20:30:55 functional-753218 kubelet[14925]: E1002 20:30:55.545737   14925 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753218\" not found" node="functional-753218"
	Oct 02 20:30:55 functional-753218 kubelet[14925]: E1002 20:30:55.564007   14925 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-753218\" not found"
	Oct 02 20:30:55 functional-753218 kubelet[14925]: E1002 20:30:55.573357   14925 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:30:55 functional-753218 kubelet[14925]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:55 functional-753218 kubelet[14925]:  > podSandboxID="7a2fde0baea214f3eb0043d508edd186efa5f3f087d902573e164eb4765f9b5b"
	Oct 02 20:30:55 functional-753218 kubelet[14925]: E1002 20:30:55.573464   14925 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:30:55 functional-753218 kubelet[14925]:         container kube-apiserver start failed in pod kube-apiserver-functional-753218_kube-system(802f0aebed1bb3dd62306b1d2076fd94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:30:55 functional-753218 kubelet[14925]:  > logger="UnhandledError"
	Oct 02 20:30:55 functional-753218 kubelet[14925]: E1002 20:30:55.573515   14925 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753218" podUID="802f0aebed1bb3dd62306b1d2076fd94"
	Oct 02 20:30:56 functional-753218 kubelet[14925]: E1002 20:30:56.170861   14925 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753218?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:30:56 functional-753218 kubelet[14925]: E1002 20:30:56.170842   14925 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753218.186ac673e6f5d5d2  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753218,UID:functional-753218,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753218 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753218,},FirstTimestamp:2025-10-02 20:26:45.540009426 +0000 UTC m=+0.879461117,LastTimestamp:2025-10-02 20:26:45.540009426 +0000 UTC m=+0.879461117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753218,}"
	Oct 02 20:30:56 functional-753218 kubelet[14925]: I1002 20:30:56.325790   14925 kubelet_node_status.go:75] "Attempting to register node" node="functional-753218"
	Oct 02 20:30:56 functional-753218 kubelet[14925]: E1002 20:30:56.326143   14925 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753218"
	Oct 02 20:30:58 functional-753218 kubelet[14925]: E1002 20:30:58.463616   14925 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 02 20:31:00 functional-753218 kubelet[14925]: E1002 20:31:00.518140   14925 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753218 -n functional-753218: exit status 2 (309.758002ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753218" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (1.29s)
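The recurring "cannot open sd-bus: No such file or directory" errors in the CRI-O and kubelet logs above are the proximate cause of this failure: every control-plane container create fails, the apiserver never comes back on 8441, and each kubectl call is refused. That error pattern usually means the container runtime is using the systemd cgroup manager while no systemd D-Bus socket is reachable inside the node. A hypothetical check from the host (container name from this run; config path and socket locations assumed to be the usual CRI-O/systemd defaults):

	docker exec functional-753218 grep -r cgroup_manager /etc/crio/
	# cgroup_manager = "systemd" means CRI-O creates container scopes via
	# sd-bus; with cgroup_manager = "cgroupfs" it would not need systemd.
	docker exec functional-753218 ls -l /run/dbus/system_bus_socket /run/systemd/private
	# sd-bus connects through sockets like these; if they are missing,
	# "cannot open sd-bus" is the expected failure mode.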

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 image load --daemon kicbase/echo-server:functional-753218 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-753218" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.02s)
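The load-then-verify pair this test runs can be repeated by hand; a minimal sketch using the same binary and profile as this run:

	out/minikube-linux-amd64 -p functional-753218 image load --daemon \
	  kicbase/echo-server:functional-753218 --alsologtostderr
	out/minikube-linux-amd64 -p functional-753218 image ls | grep echo-server \
	  || echo "echo-server not present in the runtime"
	# The test fails because the second command finds no
	# kicbase/echo-server:functional-753218 entry after the load.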

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 image load --daemon kicbase/echo-server:functional-753218 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-753218" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.07s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-753218 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-753218 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1002 20:30:53.841625   55519 out.go:360] Setting OutFile to fd 1 ...
I1002 20:30:53.841932   55519 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:30:53.841939   55519 out.go:374] Setting ErrFile to fd 2...
I1002 20:30:53.841945   55519 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:30:53.842226   55519 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
I1002 20:30:53.842569   55519 mustload.go:65] Loading cluster: functional-753218
I1002 20:30:53.843138   55519 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:30:53.846607   55519 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
I1002 20:30:53.871894   55519 host.go:66] Checking if "functional-753218" exists ...
I1002 20:30:53.872286   55519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 20:30:53.960186   55519 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:false NGoroutines:61 SystemTime:2025-10-02 20:30:53.948246476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 20:30:53.960327   55519 api_server.go:166] Checking apiserver status ...
I1002 20:30:53.960376   55519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1002 20:30:53.960419   55519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
I1002 20:30:53.986374   55519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
W1002 20:30:54.101532   55519 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1002 20:30:54.105276   55519 out.go:179] * The control-plane node functional-753218 apiserver is not running: (state=Stopped)
I1002 20:30:54.107067   55519 out.go:179]   To start a cluster, run: "minikube start -p functional-753218"

stdout: * The control-plane node functional-753218 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-753218"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-753218 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-753218 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-753218 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-753218 tunnel --alsologtostderr] ...
helpers_test.go:519: unable to terminate pid 55518: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-753218 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-753218 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.34s)
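Exit code 103 here corresponds to the advisory captured above (apiserver stopped). As a sketch, the scenario under test is two concurrent tunnels, roughly:

	# Both tunnels need a reachable apiserver; with the control plane stopped
	# each process exits with code 103 and prints the advisory above.
	out/minikube-linux-amd64 -p functional-753218 tunnel --alsologtostderr &
	out/minikube-linux-amd64 -p functional-753218 tunnel --alsologtostderr &
	wait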

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-753218
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 image load --daemon kicbase/echo-server:functional-753218 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-753218" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.11s)
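The pull/tag/load/list sequence this test drives can be reproduced manually; a sketch built only from the commands shown above (the trailing grep is added for readability):

	docker pull kicbase/echo-server:latest
	docker tag kicbase/echo-server:latest kicbase/echo-server:functional-753218
	out/minikube-linux-amd64 -p functional-753218 image load --daemon kicbase/echo-server:functional-753218
	# On success the tag should appear below; in this run it does not
	out/minikube-linux-amd64 -p functional-753218 image ls | grep echo-server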

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-753218 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-753218 apply -f testdata/testsvc.yaml: exit status 1 (66.110486ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-753218 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.07s)
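The kubectl error above suggests --validate=false as a workaround; note that this only skips the client-side OpenAPI download, and the apply would still fail while the apiserver is unreachable:

	# Skips schema validation, but the request itself still needs the apiserver
	kubectl --context functional-753218 apply -f testdata/testsvc.yaml --validate=false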

TestFunctional/parallel/MountCmd/any-port (2.28s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-753218 /tmp/TestFunctionalparallelMountCmdany-port549600056/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759437054175377768" to /tmp/TestFunctionalparallelMountCmdany-port549600056/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759437054175377768" to /tmp/TestFunctionalparallelMountCmdany-port549600056/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759437054175377768" to /tmp/TestFunctionalparallelMountCmdany-port549600056/001/test-1759437054175377768
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (300.749601ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1002 20:30:54.476549   12851 retry.go:31] will retry after 495.659339ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 20:30 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 20:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 20:30 test-1759437054175377768
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh cat /mount-9p/test-1759437054175377768
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-753218 replace --force -f testdata/busybox-mount-test.yaml
I1002 20:30:55.806681   12851 retry.go:31] will retry after 4.924483032s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-753218 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (48.245666ms)

** stderr ** 
	E1002 20:30:55.813356   56875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	error: unable to recognize "testdata/busybox-mount-test.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-753218 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (261.121207ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=41325)
	total 2
	-rw-r--r-- 1 docker docker 24 Oct  2 20:30 created-by-test
	-rw-r--r-- 1 docker docker 24 Oct  2 20:30 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Oct  2 20:30 test-1759437054175377768
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-753218 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-753218 /tmp/TestFunctionalparallelMountCmdany-port549600056/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-753218 /tmp/TestFunctionalparallelMountCmdany-port549600056/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdany-port549600056/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:41325
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port549600056/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-753218 /tmp/TestFunctionalparallelMountCmdany-port549600056/001:/mount-9p --alsologtostderr -v=1] stderr:
I1002 20:30:54.231525   55926 out.go:360] Setting OutFile to fd 1 ...
I1002 20:30:54.231702   55926 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:30:54.231714   55926 out.go:374] Setting ErrFile to fd 2...
I1002 20:30:54.231721   55926 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:30:54.232034   55926 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
I1002 20:30:54.232374   55926 mustload.go:65] Loading cluster: functional-753218
I1002 20:30:54.232887   55926 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:30:54.233442   55926 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
I1002 20:30:54.255274   55926 host.go:66] Checking if "functional-753218" exists ...
I1002 20:30:54.255675   55926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 20:30:54.327920   55926 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-02 20:30:54.31728838 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 20:30:54.328127   55926 cli_runner.go:164] Run: docker network inspect functional-753218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 20:30:54.352215   55926 out.go:179] * Mounting host path /tmp/TestFunctionalparallelMountCmdany-port549600056/001 into VM as /mount-9p ...
I1002 20:30:54.353677   55926 out.go:179]   - Mount type:   9p
I1002 20:30:54.354730   55926 out.go:179]   - User ID:      docker
I1002 20:30:54.355768   55926 out.go:179]   - Group ID:     docker
I1002 20:30:54.356865   55926 out.go:179]   - Version:      9p2000.L
I1002 20:30:54.357876   55926 out.go:179]   - Message Size: 262144
I1002 20:30:54.358800   55926 out.go:179]   - Options:      map[]
I1002 20:30:54.359749   55926 out.go:179]   - Bind Address: 192.168.49.1:41325
I1002 20:30:54.360789   55926 out.go:179] * Userspace file server: 
I1002 20:30:54.360983   55926 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1002 20:30:54.361071   55926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
I1002 20:30:54.382195   55926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
I1002 20:30:54.487158   55926 mount.go:180] unmount for /mount-9p ran successfully
I1002 20:30:54.487199   55926 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1002 20:30:54.495582   55926 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=41325,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1002 20:30:54.539208   55926 main.go:125] stdlog: ufs.go:141 connected
I1002 20:30:54.539385   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tversion tag 65535 msize 262144 version '9P2000.L'
I1002 20:30:54.539452   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rversion tag 65535 msize 262144 version '9P2000'
I1002 20:30:54.539681   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1002 20:30:54.539752   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rattach tag 0 aqid (20fa075 a69ee0de 'd')
I1002 20:30:54.539979   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 0
I1002 20:30:54.540106   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa075 a69ee0de 'd') m d775 at 0 mt 1759437054 l 4096 t 0 d 0 ext )
I1002 20:30:54.541512   55926 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/.mount-process: {Name:mke9e79fec20d41099cb21502b1ba926aa28d6a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:30:54.541733   55926 mount.go:105] mount successful: ""
I1002 20:30:54.543640   55926 out.go:179] * Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port549600056/001 to /mount-9p
I1002 20:30:54.544957   55926 out.go:203] 
I1002 20:30:54.545881   55926 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1002 20:30:55.488314   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 0
I1002 20:30:55.488428   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa075 a69ee0de 'd') m d775 at 0 mt 1759437054 l 4096 t 0 d 0 ext )
I1002 20:30:55.488778   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Twalk tag 0 fid 0 newfid 1 
I1002 20:30:55.488839   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rwalk tag 0 
I1002 20:30:55.489055   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Topen tag 0 fid 1 mode 0
I1002 20:30:55.489116   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Ropen tag 0 qid (20fa075 a69ee0de 'd') iounit 0
I1002 20:30:55.489241   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 0
I1002 20:30:55.489345   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa075 a69ee0de 'd') m d775 at 0 mt 1759437054 l 4096 t 0 d 0 ext )
I1002 20:30:55.489573   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tread tag 0 fid 1 offset 0 count 262120
I1002 20:30:55.489812   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rread tag 0 count 258
I1002 20:30:55.489978   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tread tag 0 fid 1 offset 258 count 261862
I1002 20:30:55.490018   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rread tag 0 count 0
I1002 20:30:55.490155   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tread tag 0 fid 1 offset 258 count 262120
I1002 20:30:55.490204   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rread tag 0 count 0
I1002 20:30:55.490339   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Twalk tag 0 fid 0 newfid 2 0:'test-1759437054175377768' 
I1002 20:30:55.490387   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rwalk tag 0 (20fa078 a69ee0de '') 
I1002 20:30:55.490494   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 2
I1002 20:30:55.490628   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('test-1759437054175377768' 'jenkins' 'balintp' '' q (20fa078 a69ee0de '') m 644 at 0 mt 1759437054 l 24 t 0 d 0 ext )
I1002 20:30:55.490780   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 2
I1002 20:30:55.490878   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('test-1759437054175377768' 'jenkins' 'balintp' '' q (20fa078 a69ee0de '') m 644 at 0 mt 1759437054 l 24 t 0 d 0 ext )
I1002 20:30:55.491061   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tclunk tag 0 fid 2
I1002 20:30:55.491091   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rclunk tag 0
I1002 20:30:55.491214   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1002 20:30:55.491252   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rwalk tag 0 (20fa077 a69ee0de '') 
I1002 20:30:55.491390   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 2
I1002 20:30:55.491472   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa077 a69ee0de '') m 644 at 0 mt 1759437054 l 24 t 0 d 0 ext )
I1002 20:30:55.491609   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 2
I1002 20:30:55.491706   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa077 a69ee0de '') m 644 at 0 mt 1759437054 l 24 t 0 d 0 ext )
I1002 20:30:55.491835   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tclunk tag 0 fid 2
I1002 20:30:55.491876   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rclunk tag 0
I1002 20:30:55.492067   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1002 20:30:55.492122   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rwalk tag 0 (20fa076 a69ee0de '') 
I1002 20:30:55.492241   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 2
I1002 20:30:55.492322   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa076 a69ee0de '') m 644 at 0 mt 1759437054 l 24 t 0 d 0 ext )
I1002 20:30:55.492452   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 2
I1002 20:30:55.492536   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa076 a69ee0de '') m 644 at 0 mt 1759437054 l 24 t 0 d 0 ext )
I1002 20:30:55.492674   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tclunk tag 0 fid 2
I1002 20:30:55.492704   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rclunk tag 0
I1002 20:30:55.492855   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tread tag 0 fid 1 offset 258 count 262120
I1002 20:30:55.492886   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rread tag 0 count 0
I1002 20:30:55.493029   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tclunk tag 0 fid 1
I1002 20:30:55.493058   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rclunk tag 0
I1002 20:30:55.757122   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Twalk tag 0 fid 0 newfid 1 0:'test-1759437054175377768' 
I1002 20:30:55.757192   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rwalk tag 0 (20fa078 a69ee0de '') 
I1002 20:30:55.757362   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 1
I1002 20:30:55.757465   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('test-1759437054175377768' 'jenkins' 'balintp' '' q (20fa078 a69ee0de '') m 644 at 0 mt 1759437054 l 24 t 0 d 0 ext )
I1002 20:30:55.757621   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Twalk tag 0 fid 1 newfid 2 
I1002 20:30:55.757695   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rwalk tag 0 
I1002 20:30:55.757830   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Topen tag 0 fid 2 mode 0
I1002 20:30:55.757878   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Ropen tag 0 qid (20fa078 a69ee0de '') iounit 0
I1002 20:30:55.758032   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 1
I1002 20:30:55.758143   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('test-1759437054175377768' 'jenkins' 'balintp' '' q (20fa078 a69ee0de '') m 644 at 0 mt 1759437054 l 24 t 0 d 0 ext )
I1002 20:30:55.758409   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tread tag 0 fid 2 offset 0 count 24
I1002 20:30:55.758453   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rread tag 0 count 24
I1002 20:30:55.758626   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tclunk tag 0 fid 2
I1002 20:30:55.758686   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rclunk tag 0
I1002 20:30:55.759306   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tclunk tag 0 fid 1
I1002 20:30:55.759342   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rclunk tag 0
I1002 20:30:56.068128   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 0
I1002 20:30:56.068260   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa075 a69ee0de 'd') m d775 at 0 mt 1759437054 l 4096 t 0 d 0 ext )
I1002 20:30:56.068570   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Twalk tag 0 fid 0 newfid 1 
I1002 20:30:56.068616   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rwalk tag 0 
I1002 20:30:56.068744   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Topen tag 0 fid 1 mode 0
I1002 20:30:56.068795   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Ropen tag 0 qid (20fa075 a69ee0de 'd') iounit 0
I1002 20:30:56.068909   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 0
I1002 20:30:56.069004   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa075 a69ee0de 'd') m d775 at 0 mt 1759437054 l 4096 t 0 d 0 ext )
I1002 20:30:56.069224   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tread tag 0 fid 1 offset 0 count 262120
I1002 20:30:56.069359   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rread tag 0 count 258
I1002 20:30:56.069489   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tread tag 0 fid 1 offset 258 count 261862
I1002 20:30:56.069523   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rread tag 0 count 0
I1002 20:30:56.069672   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tread tag 0 fid 1 offset 258 count 262120
I1002 20:30:56.069723   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rread tag 0 count 0
I1002 20:30:56.069871   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Twalk tag 0 fid 0 newfid 2 0:'test-1759437054175377768' 
I1002 20:30:56.069955   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rwalk tag 0 (20fa078 a69ee0de '') 
I1002 20:30:56.070078   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 2
I1002 20:30:56.070177   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('test-1759437054175377768' 'jenkins' 'balintp' '' q (20fa078 a69ee0de '') m 644 at 0 mt 1759437054 l 24 t 0 d 0 ext )
I1002 20:30:56.070306   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 2
I1002 20:30:56.070394   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('test-1759437054175377768' 'jenkins' 'balintp' '' q (20fa078 a69ee0de '') m 644 at 0 mt 1759437054 l 24 t 0 d 0 ext )
I1002 20:30:56.070499   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tclunk tag 0 fid 2
I1002 20:30:56.070528   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rclunk tag 0
I1002 20:30:56.070634   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1002 20:30:56.070683   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rwalk tag 0 (20fa077 a69ee0de '') 
I1002 20:30:56.070801   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 2
I1002 20:30:56.070887   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa077 a69ee0de '') m 644 at 0 mt 1759437054 l 24 t 0 d 0 ext )
I1002 20:30:56.071006   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 2
I1002 20:30:56.071101   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa077 a69ee0de '') m 644 at 0 mt 1759437054 l 24 t 0 d 0 ext )
I1002 20:30:56.071225   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tclunk tag 0 fid 2
I1002 20:30:56.071249   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rclunk tag 0
I1002 20:30:56.071359   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1002 20:30:56.071400   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rwalk tag 0 (20fa076 a69ee0de '') 
I1002 20:30:56.071512   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 2
I1002 20:30:56.071591   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa076 a69ee0de '') m 644 at 0 mt 1759437054 l 24 t 0 d 0 ext )
I1002 20:30:56.071725   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tstat tag 0 fid 2
I1002 20:30:56.071817   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa076 a69ee0de '') m 644 at 0 mt 1759437054 l 24 t 0 d 0 ext )
I1002 20:30:56.071910   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tclunk tag 0 fid 2
I1002 20:30:56.071932   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rclunk tag 0
I1002 20:30:56.072026   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tread tag 0 fid 1 offset 258 count 262120
I1002 20:30:56.072060   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rread tag 0 count 0
I1002 20:30:56.072176   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tclunk tag 0 fid 1
I1002 20:30:56.072226   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rclunk tag 0
I1002 20:30:56.073158   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1002 20:30:56.073207   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rerror tag 0 ename 'file not found' ecode 0
I1002 20:30:56.341128   55926 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:41028 Tclunk tag 0 fid 0
I1002 20:30:56.341179   55926 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:41028 Rclunk tag 0
I1002 20:30:56.341531   55926 main.go:125] stdlog: ufs.go:147 disconnected
I1002 20:30:56.356200   55926 out.go:179] * Unmounting /mount-9p ...
I1002 20:30:56.357306   55926 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1002 20:30:56.364755   55926 mount.go:180] unmount for /mount-9p ran successfully
I1002 20:30:56.364831   55926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/.mount-process: {Name:mke9e79fec20d41099cb21502b1ba926aa28d6a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 20:30:56.366355   55926 out.go:203] 
W1002 20:30:56.367610   55926 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1002 20:30:56.368613   55926 out.go:203] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (2.28s)
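The full mount lifecycle is visible in the logs above and can be replayed by hand; a minimal sketch using the same commands (the host path is illustrative):

	# This process must stay alive for the duration of the mount
	out/minikube-linux-amd64 mount -p functional-753218 /tmp/mnt-demo:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-753218 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-753218 ssh "sudo umount -f /mount-9p"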

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (111.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1002 20:30:54.190787   12851 retry.go:31] will retry after 1.615221195s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-753218 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-753218 get svc nginx-svc: exit status 1 (46.46992ms)

** stderr ** 
	E1002 20:32:45.308062   64049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:32:45.308415   64049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:32:45.309859   64049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:32:45.310163   64049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:32:45.311489   64049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-753218 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (111.12s)
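What the test ultimately asserts is that the tunneled LoadBalancer service answers over HTTP; a sketch assuming a running apiserver and tunnel (the jsonpath and curl lines are illustrative, not from this report):

	IP=$(kubectl --context functional-753218 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://$IP" | grep 'Welcome to nginx!'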

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 image save kicbase/echo-server:functional-753218 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)
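The save-to-file assertion is easy to check manually; a sketch (treating the output as a standard image tarball is an assumption):

	out/minikube-linux-amd64 -p functional-753218 image save kicbase/echo-server:functional-753218 /tmp/echo-server-save.tar
	# Fails in this run: the tarball is never written
	test -f /tmp/echo-server-save.tar && tar -tf /tmp/echo-server-save.tar | head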

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1002 20:30:55.778324   56863 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:30:55.778669   56863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:30:55.778680   56863 out.go:374] Setting ErrFile to fd 2...
	I1002 20:30:55.778686   56863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:30:55.778954   56863 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:30:55.779481   56863 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:30:55.779566   56863 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:30:55.779948   56863 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
	I1002 20:30:55.798361   56863 ssh_runner.go:195] Run: systemctl --version
	I1002 20:30:55.798417   56863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
	I1002 20:30:55.816283   56863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
	I1002 20:30:55.917463   56863 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1002 20:30:55.917545   56863 cache_images.go:254] Failed to load cached images for "functional-753218": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1002 20:30:55.917577   56863 cache_images.go:266] failed pushing to: functional-753218

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-753218
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 image save --daemon kicbase/echo-server:functional-753218 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-753218
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-753218: exit status 1 (16.862996ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-753218

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-753218

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)
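Note the inspect above targets a localhost/ prefix: this is consistent with CRI-O storing unqualified tags under localhost/, so that is the name expected back in the Docker daemon after the save. The round trip, as a sketch of the commands already shown:

	docker rmi kicbase/echo-server:functional-753218
	out/minikube-linux-amd64 -p functional-753218 image save --daemon kicbase/echo-server:functional-753218
	docker image inspect localhost/kicbase/echo-server:functional-753218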

TestFunctional/parallel/ServiceCmd/DeployApp (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-753218 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-753218 create deployment hello-node --image kicbase/echo-server: exit status 1 (45.451749ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-753218 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.05s)

TestFunctional/parallel/ServiceCmd/List (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 service list: exit status 103 (246.004237ms)

-- stdout --
	* The control-plane node functional-753218 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-753218"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-amd64 -p functional-753218 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-753218 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-753218\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.25s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 service list -o json: exit status 103 (304.693554ms)

-- stdout --
	* The control-plane node functional-753218 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-753218"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-amd64 -p functional-753218 service list -o json": exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 service --namespace=default --https --url hello-node: exit status 103 (234.695857ms)

-- stdout --
	* The control-plane node functional-753218 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-753218"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-753218 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.23s)

TestFunctional/parallel/ServiceCmd/Format (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 service hello-node --url --format={{.IP}}: exit status 103 (246.639854ms)

-- stdout --
	* The control-plane node functional-753218 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-753218"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-753218 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-753218 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-753218\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.25s)

TestFunctional/parallel/ServiceCmd/URL (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 service hello-node --url: exit status 103 (244.621586ms)

-- stdout --
	* The control-plane node functional-753218 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-753218"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-753218 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-753218 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-753218"
functional_test.go:1579: failed to parse "* The control-plane node functional-753218 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-753218\"": parse "* The control-plane node functional-753218 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-753218\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.24s)

TestMultiControlPlane/serial/StartCluster (502.13s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1002 20:35:54.133128   12851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
... (the same cert_rotation.go:172 "Loading client cert failed" error for functional-753218/client.crt repeats 17 more times with increasing backoff, last at E1002 20:41:21.839330) ...
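These cert_rotation errors come from the client-go transport cache (per the tls-transport-cache logger) still watching the kubeconfig entry for the earlier functional-753218 profile, whose certificate files are gone from disk; they are noise relative to the ha-872795 failure below. A quick way to confirm the stale reference (a sketch using the path and kubeconfig printed in this report):

	# the watched client cert no longer exists:
	ls -l /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt
	# contexts still present in the shared kubeconfig:
	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21683-9327/kubeconfig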
ha_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (8m20.880381024s)
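To rerun the failing invocation by hand and capture the exit code, the exact command from ha_test.go:101 can be replayed (a sketch; run from the same checkout so out/minikube-linux-amd64 and the ha-872795 profile resolve):

	out/minikube-linux-amd64 -p ha-872795 start --ha --memory 3072 --wait true \
	  --alsologtostderr -v 5 --driver=docker --container-runtime=crio
	echo "exit: $?"   # the run above exited 80 after 8m20.88s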

-- stdout --
	* [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1002 20:34:57.603077   65735 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:34:57.603305   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603314   65735 out.go:374] Setting ErrFile to fd 2...
	I1002 20:34:57.603317   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603506   65735 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:34:57.604047   65735 out.go:368] Setting JSON to false
	I1002 20:34:57.604900   65735 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4647,"bootTime":1759432651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:34:57.604978   65735 start.go:140] virtualization: kvm guest
	I1002 20:34:57.606757   65735 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:34:57.608100   65735 notify.go:221] Checking for updates...
	I1002 20:34:57.608137   65735 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:34:57.609378   65735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:34:57.610568   65735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:34:57.611851   65735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:34:57.613234   65735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:34:57.614447   65735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:34:57.615922   65735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:34:57.640606   65735 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:34:57.640699   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.694592   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.684173929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.694720   65735 docker.go:319] overlay module found
	I1002 20:34:57.696786   65735 out.go:179] * Using the docker driver based on user configuration
	I1002 20:34:57.698015   65735 start.go:306] selected driver: docker
	I1002 20:34:57.698032   65735 start.go:936] validating driver "docker" against <nil>
	I1002 20:34:57.698041   65735 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:34:57.698548   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.751429   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.741121046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.751598   65735 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:34:57.751837   65735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:34:57.753668   65735 out.go:179] * Using Docker driver with root privileges
	I1002 20:34:57.755151   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:34:57.755225   65735 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 20:34:57.755237   65735 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:34:57.755322   65735 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:34:57.756619   65735 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:34:57.757986   65735 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:34:57.759335   65735 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:34:57.760371   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:57.760415   65735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:34:57.760432   65735 cache.go:59] Caching tarball of preloaded images
	I1002 20:34:57.760405   65735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:34:57.760507   65735 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:34:57.760515   65735 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:34:57.760879   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:34:57.760904   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json: {Name:mk498e6817e1668b9a52683e21d0fee45dc5b922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:34:57.779767   65735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:34:57.779785   65735 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:34:57.779798   65735 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:34:57.779817   65735 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:34:57.779902   65735 start.go:365] duration metric: took 70.896µs to acquireMachinesLock for "ha-872795"
	I1002 20:34:57.779922   65735 start.go:94] Provisioning new machine with config: &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:34:57.779985   65735 start.go:126] createHost starting for "" (driver="docker")
	I1002 20:34:57.782551   65735 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 20:34:57.782770   65735 start.go:160] libmachine.API.Create for "ha-872795" (driver="docker")
	I1002 20:34:57.782797   65735 client.go:168] LocalClient.Create starting
	I1002 20:34:57.782848   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem
	I1002 20:34:57.782884   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782899   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.782957   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem
	I1002 20:34:57.782975   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782985   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.783261   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:34:57.799786   65735 cli_runner.go:211] docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:34:57.799838   65735 network_create.go:284] running [docker network inspect ha-872795] to gather additional debugging logs...
	I1002 20:34:57.799850   65735 cli_runner.go:164] Run: docker network inspect ha-872795
	W1002 20:34:57.816003   65735 cli_runner.go:211] docker network inspect ha-872795 returned with exit code 1
	I1002 20:34:57.816028   65735 network_create.go:287] error running [docker network inspect ha-872795]: docker network inspect ha-872795: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-872795 not found
	I1002 20:34:57.816042   65735 network_create.go:289] output of [docker network inspect ha-872795]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-872795 not found
	
	** /stderr **
	I1002 20:34:57.816123   65735 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:34:57.832837   65735 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d0e700}
	I1002 20:34:57.832869   65735 network_create.go:124] attempt to create docker network ha-872795 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:34:57.832917   65735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-872795 ha-872795
	I1002 20:34:57.888359   65735 network_create.go:108] docker network ha-872795 192.168.49.0/24 created
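With the network in place, the inspect that failed moments earlier should now succeed; a minimal check of the subnet and gateway minikube picked (a hypothetical verification step, not part of the test run):

	docker network inspect ha-872795 \
	  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	# expected: ha-872795: 192.168.49.0/24 via 192.168.49.1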
	I1002 20:34:57.888401   65735 kic.go:121] calculated static IP "192.168.49.2" for the "ha-872795" container
	I1002 20:34:57.888473   65735 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:34:57.905091   65735 cli_runner.go:164] Run: docker volume create ha-872795 --label name.minikube.sigs.k8s.io=ha-872795 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:34:57.922367   65735 oci.go:103] Successfully created a docker volume ha-872795
	I1002 20:34:57.922439   65735 cli_runner.go:164] Run: docker run --rm --name ha-872795-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --entrypoint /usr/bin/test -v ha-872795:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:34:58.293551   65735 oci.go:107] Successfully prepared a docker volume ha-872795
	I1002 20:34:58.293606   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:58.293630   65735 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:34:58.293727   65735 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:35:02.570537   65735 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.276723115s)
	I1002 20:35:02.570574   65735 kic.go:203] duration metric: took 4.276941565s to extract preloaded images to volume ...
	W1002 20:35:02.570728   65735 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 20:35:02.570763   65735 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 20:35:02.570812   65735 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:35:02.622879   65735 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-872795 --name ha-872795 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-872795 --network ha-872795 --ip 192.168.49.2 --volume ha-872795:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
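The run above publishes ports 8443, 22, 2376, 5000 and 32443 on ephemeral 127.0.0.1 ports. To see which host ports Docker assigned (a sketch; the log below shows SSH landed on 32783):

	docker port ha-872795
	docker port ha-872795 22/tcp   # e.g. 127.0.0.1:32783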
	I1002 20:35:02.884170   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Running}}
	I1002 20:35:02.901900   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:02.919319   65735 cli_runner.go:164] Run: docker exec ha-872795 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:35:02.965612   65735 oci.go:144] the created container "ha-872795" has a running status.
	I1002 20:35:02.965668   65735 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa...
	I1002 20:35:03.839142   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 20:35:03.839192   65735 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:35:03.863522   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.880799   65735 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:35:03.880817   65735 kic_runner.go:114] Args: [docker exec --privileged ha-872795 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:35:03.926950   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.943420   65735 machine.go:93] provisionDockerMachine start ...
	I1002 20:35:03.943502   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:03.960150   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:03.960365   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:03.960377   65735 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:35:04.102560   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.102595   65735 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:35:04.102672   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.119770   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.119958   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.119969   65735 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:35:04.269954   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.270024   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.289297   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.289532   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.289560   65735 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:35:04.431294   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:35:04.431331   65735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:35:04.431373   65735 ubuntu.go:190] setting up certificates
	I1002 20:35:04.431385   65735 provision.go:84] configureAuth start
	I1002 20:35:04.431438   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:04.448348   65735 provision.go:143] copyHostCerts
	I1002 20:35:04.448379   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448406   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:35:04.448415   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448488   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:35:04.448574   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448595   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:35:04.448601   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448642   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:35:04.448726   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448747   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:35:04.448752   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448791   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:35:04.448862   65735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:35:04.988583   65735 provision.go:177] copyRemoteCerts
	I1002 20:35:04.988660   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:35:04.988705   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.006318   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
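The same connection details allow a manual SSH into the node, e.g. to inspect crio or kubelet directly (a sketch assembled from the IP, port and key path logged above):

	ssh -o StrictHostKeyChecking=no -p 32783 \
	  -i /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa \
	  docker@127.0.0.1 hostname
	# or equivalently: out/minikube-linux-amd64 -p ha-872795 ssh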
	I1002 20:35:05.107594   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:35:05.107664   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:35:05.126297   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:35:05.126364   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 20:35:05.143480   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:35:05.143534   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:35:05.159876   65735 provision.go:87] duration metric: took 728.473665ms to configureAuth
	I1002 20:35:05.159898   65735 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:35:05.160046   65735 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:35:05.160126   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.177170   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:05.177376   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:05.177394   65735 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:35:05.428320   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:35:05.428345   65735 machine.go:96] duration metric: took 1.484903501s to provisionDockerMachine
	I1002 20:35:05.428356   65735 client.go:171] duration metric: took 7.645553848s to LocalClient.Create
	I1002 20:35:05.428375   65735 start.go:168] duration metric: took 7.645605965s to libmachine.API.Create "ha-872795"
	I1002 20:35:05.428384   65735 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:35:05.428397   65735 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:35:05.428457   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:35:05.428518   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.445519   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.548684   65735 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:35:05.552289   65735 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:35:05.552315   65735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:35:05.552328   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:35:05.552389   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:35:05.552506   65735 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:35:05.552523   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:35:05.552690   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:35:05.559922   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:05.579393   65735 start.go:297] duration metric: took 150.9926ms for postStartSetup
	I1002 20:35:05.579737   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.596296   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:35:05.596542   65735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:35:05.596580   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.613097   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.710446   65735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:35:05.714631   65735 start.go:129] duration metric: took 7.934634425s to createHost
	I1002 20:35:05.714669   65735 start.go:84] releasing machines lock for "ha-872795", held for 7.934754949s
	I1002 20:35:05.714730   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.731548   65735 ssh_runner.go:195] Run: cat /version.json
	I1002 20:35:05.731603   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.731636   65735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:35:05.731714   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.749775   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.750794   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.899878   65735 ssh_runner.go:195] Run: systemctl --version
	I1002 20:35:05.906012   65735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:35:05.939030   65735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:35:05.945162   65735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:35:05.945228   65735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:35:05.970822   65735 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 20:35:05.970843   65735 start.go:496] detecting cgroup driver to use...
	I1002 20:35:05.970881   65735 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:35:05.970920   65735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:35:05.986282   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:35:05.998266   65735 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:35:05.998312   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:35:06.014196   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:35:06.031061   65735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:35:06.110898   65735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:35:06.195426   65735 docker.go:234] disabling docker service ...
	I1002 20:35:06.195481   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:35:06.213240   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:35:06.225772   65735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:35:06.305091   65735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:35:06.379591   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:35:06.391746   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:35:06.405291   65735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:35:06.405351   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.415561   65735 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:35:06.415640   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.425296   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.433995   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.442516   65735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:35:06.450342   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.458541   65735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.471597   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.479764   65735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:35:06.486745   65735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:35:06.493925   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:06.570695   65735 ssh_runner.go:195] Run: sudo systemctl restart crio
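The sed series above leaves /etc/crio/crio.conf.d/02-crio.conf with the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A quick check of the drop-in after the restart (a sketch, run inside the node):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",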
	I1002 20:35:06.674347   65735 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:35:06.674398   65735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:35:06.678210   65735 start.go:564] Will wait 60s for crictl version
	I1002 20:35:06.678265   65735 ssh_runner.go:195] Run: which crictl
	I1002 20:35:06.681670   65735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:35:06.705846   65735 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:35:06.705915   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.732749   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.761896   65735 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:35:06.763235   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:35:06.779789   65735 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:35:06.783762   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.793731   65735 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:35:06.793825   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:35:06.793893   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.823605   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.823628   65735 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:35:06.823701   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.848195   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.848217   65735 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:35:06.848224   65735 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:35:06.848297   65735 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:35:06.848363   65735 ssh_runner.go:195] Run: crio config
	I1002 20:35:06.893288   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:35:06.893308   65735 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:35:06.893324   65735 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:35:06.893344   65735 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:35:06.893447   65735 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
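This config is written a few lines below to /var/tmp/minikube/kubeadm.yaml.new. Because it is plain kubeadm v1beta4, it can be sanity-checked standalone against the bundled binaries (a sketch; kubeadm config validate is available in recent kubeadm releases, run inside the node):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new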
	I1002 20:35:06.893469   65735 kube-vip.go:115] generating kube-vip config ...
	I1002 20:35:06.893510   65735 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 20:35:06.905349   65735 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
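Without the ip_vs modules, kube-vip falls back to ARP-based failover for the VIP (note vip_arp=true in the config below) instead of IPVS load-balancing. If the modules exist for the running kernel, loading them before start avoids the downgrade (a sketch, assuming a host where they are built):

	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	lsmod | grep ip_vs   # the same probe minikube runs above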
	I1002 20:35:06.905448   65735 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
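The manifest is written below to /etc/kubernetes/manifests/kube-vip.yaml, so kubelet runs kube-vip as a static pod and the VIP 192.168.49.254 should answer once a leader holds the plndr-cp-lock lease. A minimal post-start check (hypothetical, outside the test harness):

	out/minikube-linux-amd64 -p ha-872795 ssh -- sudo cat /etc/kubernetes/manifests/kube-vip.yaml
	ping -c 1 192.168.49.254   # the control-plane VIP configured above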
	I1002 20:35:06.905495   65735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:35:06.913067   65735 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:35:06.913130   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 20:35:06.920364   65735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:35:06.932249   65735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:35:06.947038   65735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:35:06.959316   65735 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 20:35:06.973553   65735 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 20:35:06.977144   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.987569   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:07.065543   65735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:35:07.088051   65735 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:35:07.088073   65735 certs.go:195] generating shared ca certs ...
	I1002 20:35:07.088093   65735 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.088253   65735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:35:07.088318   65735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:35:07.088333   65735 certs.go:257] generating profile certs ...
	I1002 20:35:07.088394   65735 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:35:07.088419   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt with IP's: []
	I1002 20:35:07.271177   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt ...
	I1002 20:35:07.271216   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt: {Name:mkc9958351da3092e35cf2a0dff49f20b7dda9b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271433   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key ...
	I1002 20:35:07.271450   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key: {Name:mk9049d9b976553e3e7ceaf1eae8281114f2b93c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271566   65735 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2
	I1002 20:35:07.271586   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 20:35:07.440257   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 ...
	I1002 20:35:07.440291   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2: {Name:mk26eb0eaef560c0eb32f96e5b3e81634dfe2a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440486   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 ...
	I1002 20:35:07.440505   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2: {Name:mkc88fc8c0ccf5f6abaf6c3777b5135914058a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440621   65735 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt
	I1002 20:35:07.440775   65735 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key
	I1002 20:35:07.440868   65735 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:35:07.440913   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt with IP's: []
	I1002 20:35:07.579277   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt ...
	I1002 20:35:07.579309   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt: {Name:mk15b2fa98abbfdb48c91ec680acf9fb3d36790e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579497   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key ...
	I1002 20:35:07.579512   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key: {Name:mk9a0c724bfce05599dc583fc00cefa3d87ea4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579634   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:35:07.579676   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:35:07.579703   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:35:07.579724   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:35:07.579741   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:35:07.579761   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:35:07.579777   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:35:07.579793   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:35:07.579863   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:35:07.579919   65735 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:35:07.579934   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:35:07.579978   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:35:07.580017   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:35:07.580054   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:35:07.580109   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:07.580147   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.580168   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.580188   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.580777   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:35:07.598837   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:35:07.615911   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:35:07.632508   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:35:07.649448   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:35:07.666220   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:35:07.682564   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:35:07.698671   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:35:07.715367   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:35:07.733828   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:35:07.750440   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:35:07.767319   65735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
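
With the certificates now copied to the node, the apiserver cert's SANs can be checked against the IP list logged at 20:35:07.271586 (10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254). A quick confirmation, assuming openssl is available on the node:

    # Print the IP SANs baked into the copied apiserver certificate
    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 "Subject Alternative Name"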
	I1002 20:35:07.779274   65735 ssh_runner.go:195] Run: openssl version
	I1002 20:35:07.785412   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:35:07.793537   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797120   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797185   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.830751   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:35:07.839226   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:35:07.847497   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851214   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851284   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.884464   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:35:07.892882   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:35:07.901283   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904919   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904989   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.938825   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
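
Each symlink created above follows the OpenSSL subject-hash convention for CA lookup directories: the link name is the certificate's subject hash plus a ".0" suffix, which is why a hash step precedes each link. Reproducing the minikubeCA link from this run by hand:

    # -hash prints the subject hash OpenSSL uses for directory lookups (b5213941 in this run)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0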
	I1002 20:35:07.947236   65735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:35:07.950736   65735 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:35:07.950798   65735 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:35:07.950893   65735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:35:07.950946   65735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:35:07.977912   65735 cri.go:89] found id: ""
	I1002 20:35:07.977979   65735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:35:07.986292   65735 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:35:07.994921   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:35:07.994975   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:35:08.002478   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:35:08.002496   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:35:08.002543   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:35:08.009971   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:35:08.010024   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:35:08.017237   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:35:08.024502   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:35:08.024556   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:35:08.031907   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.039330   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:35:08.039379   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.046463   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:35:08.053807   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:35:08.053903   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
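
The four grep-then-rm pairs above implement a single pattern: any kubeconfig under /etc/kubernetes that does not reference the HA endpoint is treated as stale and removed before kubeadm runs. The same cleanup as one loop, a sketch equivalent to the logged commands:

    # Drop kubeconfigs that don't point at control-plane.minikube.internal:8443
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done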
	I1002 20:35:08.060624   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:35:08.119106   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:35:08.173560   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:39:12.973904   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:39:12.973990   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:39:12.976774   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:12.976818   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:12.976887   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:12.976934   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:12.976995   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:12.977058   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:12.977098   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:12.977140   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:12.977186   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:12.977225   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:12.977267   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:12.977305   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:12.977341   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:12.977406   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:12.977543   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:12.977704   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:12.977780   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:12.979758   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:12.979842   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:12.979923   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:12.980019   65735 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:39:12.980106   65735 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:39:12.980194   65735 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:39:12.980269   65735 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:39:12.980321   65735 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:39:12.980413   65735 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980466   65735 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:39:12.980568   65735 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980632   65735 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:39:12.980716   65735 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:39:12.980755   65735 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:39:12.980811   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:12.980857   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:12.980906   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:12.980951   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:12.981048   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:12.981126   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:12.981273   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:12.981350   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:12.983755   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:12.983842   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:12.983938   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:12.984036   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:12.984176   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:12.984257   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:12.984363   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:12.984488   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:12.984545   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:12.984753   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:12.984854   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:12.984903   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.922515ms
	I1002 20:39:12.984979   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:12.985054   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:12.985161   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:12.985238   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:39:12.985311   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	I1002 20:39:12.985378   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	I1002 20:39:12.985440   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	I1002 20:39:12.985446   65735 kubeadm.go:318] 
	I1002 20:39:12.985522   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:39:12.985593   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:39:12.985693   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:39:12.985824   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:39:12.985912   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:39:12.986021   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:39:12.986035   65735 kubeadm.go:318] 
	W1002 20:39:12.986169   65735 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.922515ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 20:39:12.986234   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:39:15.725293   65735 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.739035287s)
	I1002 20:39:15.725355   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:39:15.737845   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:39:15.737903   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:39:15.745492   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:39:15.745511   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:39:15.745550   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:39:15.752939   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:39:15.752992   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:39:15.759738   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:39:15.767128   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:39:15.767188   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:39:15.774064   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.781262   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:39:15.781310   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.788069   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:39:15.794962   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:39:15.795005   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:39:15.801555   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:39:15.838178   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:15.838265   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:15.857468   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:15.857554   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:15.857595   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:15.857662   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:15.857732   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:15.857790   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:15.857850   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:15.857910   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:15.857968   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:15.858025   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:15.858074   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:15.913423   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:15.913519   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:15.913695   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:15.919381   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:15.923412   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:15.923499   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:15.923576   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:15.923699   65735 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:39:15.923782   65735 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:39:15.923894   65735 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:39:15.923992   65735 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:39:15.924083   65735 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:39:15.924181   65735 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:39:15.924290   65735 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:39:15.924413   65735 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:39:15.924467   65735 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:39:15.924516   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:15.972082   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:16.453776   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:16.604369   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:16.663878   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:16.897315   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:16.897840   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:16.900020   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:16.904460   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:16.904580   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:16.904708   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:16.904777   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:16.917381   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:16.917507   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:16.923733   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:16.923961   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:16.924014   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:17.026795   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:17.026994   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:18.027761   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001153954s
	I1002 20:39:18.030723   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:18.030874   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:18.030983   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:18.031132   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:43:18.032268   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	I1002 20:43:18.032470   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	I1002 20:43:18.032708   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	I1002 20:43:18.032742   65735 kubeadm.go:318] 
	I1002 20:43:18.032978   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:43:18.033184   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:43:18.033380   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:43:18.033599   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:43:18.033817   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:43:18.034037   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:43:18.034052   65735 kubeadm.go:318] 
	I1002 20:43:18.036299   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:43:18.036439   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:43:18.037026   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 20:43:18.037095   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:43:18.037166   65735 kubeadm.go:402] duration metric: took 8m10.08637402s to StartCluster
	I1002 20:43:18.037208   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:43:18.037266   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:43:18.063637   65735 cri.go:89] found id: ""
	I1002 20:43:18.063696   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.063705   65735 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:43:18.063713   65735 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:43:18.063764   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:43:18.087963   65735 cri.go:89] found id: ""
	I1002 20:43:18.087984   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.087991   65735 logs.go:284] No container was found matching "etcd"
	I1002 20:43:18.087996   65735 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:43:18.088039   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:43:18.111877   65735 cri.go:89] found id: ""
	I1002 20:43:18.111898   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.111905   65735 logs.go:284] No container was found matching "coredns"
	I1002 20:43:18.111911   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:43:18.111958   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:43:18.134814   65735 cri.go:89] found id: ""
	I1002 20:43:18.134834   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.134841   65735 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:43:18.134847   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:43:18.134887   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:43:18.158528   65735 cri.go:89] found id: ""
	I1002 20:43:18.158551   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.158558   65735 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:43:18.158564   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:43:18.158607   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:43:18.182849   65735 cri.go:89] found id: ""
	I1002 20:43:18.182871   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.182878   65735 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:43:18.182883   65735 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:43:18.182926   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:43:18.207447   65735 cri.go:89] found id: ""
	I1002 20:43:18.207470   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.207481   65735 logs.go:284] No container was found matching "kindnet"
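
All seven probes above return empty IDs: no control-plane container was ever created, which matches the connection-refused health checks earlier. The same scan as a loop over the component names (sketch):

    # Probe each expected component; empty output means the container never ran
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done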
	I1002 20:43:18.207493   65735 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:43:18.207507   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:43:18.263385   65735 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
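	(The refused endpoints above can also be probed by hand once the node is up; a hedged sketch, reusing the apiserver address and port that the kubeadm checks report further below:
		curl -sk https://192.168.49.2:8443/livez    # prints "ok" on a healthy control plane
	)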
	I1002 20:43:18.263409   65735 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:43:18.263421   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:43:18.324936   65735 logs.go:123] Gathering logs for container status ...
	I1002 20:43:18.324969   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:43:18.351509   65735 logs.go:123] Gathering logs for kubelet ...
	I1002 20:43:18.351536   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:43:18.417761   65735 logs.go:123] Gathering logs for dmesg ...
	I1002 20:43:18.417794   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
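	(The same gathering can be replayed by hand against a live node; a sketch wrapping the Run: commands above in minikube ssh, with the profile name taken from this run:
		minikube ssh -p ha-872795 -- "sudo journalctl -u kubelet -n 400"
		minikube ssh -p ha-872795 -- "sudo journalctl -u crio -n 400"
	)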
	W1002 20:43:18.428567   65735 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:43:18.428641   65735 out.go:285] * 
	W1002 20:43:18.428735   65735 out.go:285] * 
	W1002 20:43:18.430415   65735 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:43:18.433894   65735 out.go:203] 
	W1002 20:43:18.435307   65735 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1002 20:43:18.435342   65735 out.go:285] * 
	I1002 20:43:18.437263   65735 out.go:203] 

** /stderr **
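The kubeadm message in the log above already spells out the triage path. As a hedged sketch of running those same steps by hand (the socket path is copied from the message; CONTAINERID is a placeholder, and the profile name is taken from this run):

	# list kube-* containers via CRI-O inside the node
	minikube ssh -p ha-872795 -- "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# then inspect the logs of whichever container is failing
	minikube ssh -p ha-872795 -- "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID"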
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-872795 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
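For filing the failure upstream, the advice box in the output above gives the collection command; a sketch using this run's binary and profile (the -p flag is an assumption, since the box omits it):

	out/minikube-linux-amd64 -p ha-872795 logs --file=logs.txt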
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
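A sketch of reproducing this snapshot with a plain environment grep (the lowercase variants are an assumption; the harness prints only the uppercase ones):

	env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo "no proxy variables set"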
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 66301,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:35:02.679281779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f7265370a58453c840a8192d5da4a6feb4bbbb948c2dbabc9298cbd8189ca6f",
	            "SandboxKey": "/var/run/docker/netns/6f7265370a58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:33:39:9e:e0:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "632391284aa44c2eb0bec81b03555e3fed7084145763827002664c08b386a588",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
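When only a field or two of the inspect dump is needed, a Go-template filter avoids the full JSON; a sketch using field paths visible in the dump above:

	docker inspect -f '{{.State.Status}}' ha-872795                 # running
	docker inspect -f '{{json .NetworkSettings.Ports}}' ha-872795   # host-port mappings, incl. 8443 -> 32786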
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 6 (279.644246ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:43:18.770149   70881 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
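The exit status 6 pairs a Running host state with the stale kubeconfig endpoint shown in stderr, and the stdout above names the fix. A sketch of applying and re-checking it (profile name from this run):

	out/minikube-linux-amd64 -p ha-872795 update-context
	out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795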
helpers_test.go:252: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-753218 ssh findmnt -T /mount1                                                                        │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service        │ functional-753218 service list -o json                                                                          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service        │ functional-753218 service --namespace=default --https --url hello-node                                          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ service        │ functional-753218 service hello-node --url --format={{.IP}}                                                     │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh            │ functional-753218 ssh findmnt -T /mount1                                                                        │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ service        │ functional-753218 service hello-node --url                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │                     │
	│ ssh            │ functional-753218 ssh findmnt -T /mount2                                                                        │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ ssh            │ functional-753218 ssh findmnt -T /mount3                                                                        │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:30 UTC │ 02 Oct 25 20:30 UTC │
	│ mount          │ -p functional-753218 --kill=true                                                                                │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ start          │ -p functional-753218 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ start          │ -p functional-753218 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio       │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ start          │ -p functional-753218 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                 │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-753218 --alsologtostderr -v=1                                                  │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ update-context │ functional-753218 update-context --alsologtostderr -v=2                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ update-context │ functional-753218 update-context --alsologtostderr -v=2                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ update-context │ functional-753218 update-context --alsologtostderr -v=2                                                         │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image          │ functional-753218 image ls --format short --alsologtostderr                                                     │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ ssh            │ functional-753218 ssh pgrep buildkitd                                                                           │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ image          │ functional-753218 image ls --format yaml --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image          │ functional-753218 image build -t localhost/my-image:functional-753218 testdata/build --alsologtostderr          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image          │ functional-753218 image ls --format json --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image          │ functional-753218 image ls --format table --alsologtostderr                                                     │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image          │ functional-753218 image ls                                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ delete         │ -p functional-753218                                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ start          │ ha-872795 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:34:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:34:57.603077   65735 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:34:57.603305   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603314   65735 out.go:374] Setting ErrFile to fd 2...
	I1002 20:34:57.603317   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603506   65735 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:34:57.604047   65735 out.go:368] Setting JSON to false
	I1002 20:34:57.604900   65735 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4647,"bootTime":1759432651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:34:57.604978   65735 start.go:140] virtualization: kvm guest
	I1002 20:34:57.606757   65735 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:34:57.608100   65735 notify.go:221] Checking for updates...
	I1002 20:34:57.608137   65735 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:34:57.609378   65735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:34:57.610568   65735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:34:57.611851   65735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:34:57.613234   65735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:34:57.614447   65735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:34:57.615922   65735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:34:57.640606   65735 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:34:57.640699   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.694592   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.684173929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.694720   65735 docker.go:319] overlay module found
	I1002 20:34:57.696786   65735 out.go:179] * Using the docker driver based on user configuration
	I1002 20:34:57.698015   65735 start.go:306] selected driver: docker
	I1002 20:34:57.698032   65735 start.go:936] validating driver "docker" against <nil>
	I1002 20:34:57.698041   65735 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:34:57.698548   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.751429   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.741121046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.751598   65735 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:34:57.751837   65735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:34:57.753668   65735 out.go:179] * Using Docker driver with root privileges
	I1002 20:34:57.755151   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:34:57.755225   65735 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 20:34:57.755237   65735 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:34:57.755322   65735 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:34:57.756619   65735 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:34:57.757986   65735 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:34:57.759335   65735 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:34:57.760371   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:57.760415   65735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:34:57.760432   65735 cache.go:59] Caching tarball of preloaded images
	I1002 20:34:57.760405   65735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:34:57.760507   65735 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:34:57.760515   65735 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:34:57.760879   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:34:57.760904   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json: {Name:mk498e6817e1668b9a52683e21d0fee45dc5b922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:34:57.779767   65735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:34:57.779785   65735 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:34:57.779798   65735 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:34:57.779817   65735 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:34:57.779902   65735 start.go:365] duration metric: took 70.896µs to acquireMachinesLock for "ha-872795"
	I1002 20:34:57.779922   65735 start.go:94] Provisioning new machine with config: &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:34:57.779985   65735 start.go:126] createHost starting for "" (driver="docker")
	I1002 20:34:57.782551   65735 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 20:34:57.782770   65735 start.go:160] libmachine.API.Create for "ha-872795" (driver="docker")
	I1002 20:34:57.782797   65735 client.go:168] LocalClient.Create starting
	I1002 20:34:57.782848   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem
	I1002 20:34:57.782884   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782899   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.782957   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem
	I1002 20:34:57.782975   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782985   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.783261   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:34:57.799786   65735 cli_runner.go:211] docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:34:57.799838   65735 network_create.go:284] running [docker network inspect ha-872795] to gather additional debugging logs...
	I1002 20:34:57.799850   65735 cli_runner.go:164] Run: docker network inspect ha-872795
	W1002 20:34:57.816003   65735 cli_runner.go:211] docker network inspect ha-872795 returned with exit code 1
	I1002 20:34:57.816028   65735 network_create.go:287] error running [docker network inspect ha-872795]: docker network inspect ha-872795: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-872795 not found
	I1002 20:34:57.816042   65735 network_create.go:289] output of [docker network inspect ha-872795]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-872795 not found
	
	** /stderr **
	I1002 20:34:57.816123   65735 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:34:57.832837   65735 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d0e700}
	I1002 20:34:57.832869   65735 network_create.go:124] attempt to create docker network ha-872795 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:34:57.832917   65735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-872795 ha-872795
	I1002 20:34:57.888359   65735 network_create.go:108] docker network ha-872795 192.168.49.0/24 created
	I1002 20:34:57.888401   65735 kic.go:121] calculated static IP "192.168.49.2" for the "ha-872795" container
	I1002 20:34:57.888473   65735 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:34:57.905091   65735 cli_runner.go:164] Run: docker volume create ha-872795 --label name.minikube.sigs.k8s.io=ha-872795 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:34:57.922367   65735 oci.go:103] Successfully created a docker volume ha-872795
	I1002 20:34:57.922439   65735 cli_runner.go:164] Run: docker run --rm --name ha-872795-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --entrypoint /usr/bin/test -v ha-872795:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:34:58.293551   65735 oci.go:107] Successfully prepared a docker volume ha-872795
	I1002 20:34:58.293606   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:58.293630   65735 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:34:58.293727   65735 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:35:02.570537   65735 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.276723115s)
	I1002 20:35:02.570574   65735 kic.go:203] duration metric: took 4.276941565s to extract preloaded images to volume ...
	W1002 20:35:02.570728   65735 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 20:35:02.570763   65735 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 20:35:02.570812   65735 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:35:02.622879   65735 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-872795 --name ha-872795 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-872795 --network ha-872795 --ip 192.168.49.2 --volume ha-872795:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:35:02.884170   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Running}}
	I1002 20:35:02.901900   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:02.919319   65735 cli_runner.go:164] Run: docker exec ha-872795 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:35:02.965612   65735 oci.go:144] the created container "ha-872795" has a running status.
	I1002 20:35:02.965668   65735 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa...
	I1002 20:35:03.839142   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 20:35:03.839192   65735 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:35:03.863522   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.880799   65735 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:35:03.880817   65735 kic_runner.go:114] Args: [docker exec --privileged ha-872795 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:35:03.926950   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.943420   65735 machine.go:93] provisionDockerMachine start ...
	I1002 20:35:03.943502   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:03.960150   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:03.960365   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:03.960377   65735 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:35:04.102560   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.102595   65735 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:35:04.102672   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.119770   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.119958   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.119969   65735 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:35:04.269954   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.270024   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.289297   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.289532   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.289560   65735 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:35:04.431294   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:35:04.431331   65735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:35:04.431373   65735 ubuntu.go:190] setting up certificates
	I1002 20:35:04.431385   65735 provision.go:84] configureAuth start
	I1002 20:35:04.431438   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:04.448348   65735 provision.go:143] copyHostCerts
	I1002 20:35:04.448379   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448406   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:35:04.448415   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448488   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:35:04.448574   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448595   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:35:04.448601   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448642   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:35:04.448726   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448747   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:35:04.448752   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448791   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:35:04.448862   65735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:35:04.988583   65735 provision.go:177] copyRemoteCerts
	I1002 20:35:04.988660   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:35:04.988705   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.006318   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.107594   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:35:05.107664   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:35:05.126297   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:35:05.126364   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 20:35:05.143480   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:35:05.143534   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:35:05.159876   65735 provision.go:87] duration metric: took 728.473665ms to configureAuth
	I1002 20:35:05.159898   65735 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:35:05.160046   65735 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:35:05.160126   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.177170   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:05.177376   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:05.177394   65735 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:35:05.428320   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:35:05.428345   65735 machine.go:96] duration metric: took 1.484903501s to provisionDockerMachine
	I1002 20:35:05.428356   65735 client.go:171] duration metric: took 7.645553848s to LocalClient.Create
	I1002 20:35:05.428375   65735 start.go:168] duration metric: took 7.645605965s to libmachine.API.Create "ha-872795"
	I1002 20:35:05.428384   65735 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:35:05.428397   65735 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:35:05.428457   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:35:05.428518   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.445519   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.548684   65735 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:35:05.552289   65735 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:35:05.552315   65735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:35:05.552328   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:35:05.552389   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:35:05.552506   65735 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:35:05.552523   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:35:05.552690   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:35:05.559922   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:05.579393   65735 start.go:297] duration metric: took 150.9926ms for postStartSetup
	I1002 20:35:05.579737   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.596296   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:35:05.596542   65735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:35:05.596580   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.613097   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.710446   65735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:35:05.714631   65735 start.go:129] duration metric: took 7.934634425s to createHost
	I1002 20:35:05.714669   65735 start.go:84] releasing machines lock for "ha-872795", held for 7.934754949s
	I1002 20:35:05.714730   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.731548   65735 ssh_runner.go:195] Run: cat /version.json
	I1002 20:35:05.731603   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.731636   65735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:35:05.731714   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.749775   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.750794   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.899878   65735 ssh_runner.go:195] Run: systemctl --version
	I1002 20:35:05.906012   65735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:35:05.939030   65735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:35:05.945162   65735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:35:05.945228   65735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:35:05.970822   65735 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 20:35:05.970843   65735 start.go:496] detecting cgroup driver to use...
	I1002 20:35:05.970881   65735 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:35:05.970920   65735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:35:05.986282   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:35:05.998266   65735 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:35:05.998312   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:35:06.014196   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:35:06.031061   65735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:35:06.110898   65735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:35:06.195426   65735 docker.go:234] disabling docker service ...
	I1002 20:35:06.195481   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:35:06.213240   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:35:06.225772   65735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:35:06.305091   65735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:35:06.379591   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:35:06.391746   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:35:06.405291   65735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:35:06.405351   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.415561   65735 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:35:06.415640   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.425296   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.433995   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.442516   65735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:35:06.450342   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.458541   65735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.471597   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.479764   65735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:35:06.486745   65735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:35:06.493925   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:06.570695   65735 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:35:06.674347   65735 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:35:06.674398   65735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:35:06.678210   65735 start.go:564] Will wait 60s for crictl version
	I1002 20:35:06.678265   65735 ssh_runner.go:195] Run: which crictl
	I1002 20:35:06.681670   65735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:35:06.705846   65735 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:35:06.705915   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.732749   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.761896   65735 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:35:06.763235   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:35:06.779789   65735 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:35:06.783762   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.793731   65735 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:35:06.793825   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:35:06.793893   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.823605   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.823628   65735 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:35:06.823701   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.848195   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.848217   65735 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:35:06.848224   65735 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:35:06.848297   65735 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:35:06.848363   65735 ssh_runner.go:195] Run: crio config
	I1002 20:35:06.893288   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:35:06.893308   65735 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:35:06.893324   65735 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:35:06.893344   65735 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:35:06.893447   65735 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:35:06.893469   65735 kube-vip.go:115] generating kube-vip config ...
	I1002 20:35:06.893510   65735 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 20:35:06.905349   65735 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:35:06.905448   65735 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1002 20:35:06.905495   65735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:35:06.913067   65735 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:35:06.913130   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 20:35:06.920364   65735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:35:06.932249   65735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:35:06.947038   65735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:35:06.959316   65735 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 20:35:06.973553   65735 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 20:35:06.977144   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.987569   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:07.065543   65735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:35:07.088051   65735 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:35:07.088073   65735 certs.go:195] generating shared ca certs ...
	I1002 20:35:07.088093   65735 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.088253   65735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:35:07.088318   65735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:35:07.088333   65735 certs.go:257] generating profile certs ...
	I1002 20:35:07.088394   65735 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:35:07.088419   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt with IP's: []
	I1002 20:35:07.271177   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt ...
	I1002 20:35:07.271216   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt: {Name:mkc9958351da3092e35cf2a0dff49f20b7dda9b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271433   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key ...
	I1002 20:35:07.271450   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key: {Name:mk9049d9b976553e3e7ceaf1eae8281114f2b93c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271566   65735 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2
	I1002 20:35:07.271586   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 20:35:07.440257   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 ...
	I1002 20:35:07.440291   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2: {Name:mk26eb0eaef560c0eb32f96e5b3e81634dfe2a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440486   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 ...
	I1002 20:35:07.440505   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2: {Name:mkc88fc8c0ccf5f6abaf6c3777b5135914058a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440621   65735 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt
	I1002 20:35:07.440775   65735 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key
	I1002 20:35:07.440868   65735 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:35:07.440913   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt with IP's: []
	I1002 20:35:07.579277   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt ...
	I1002 20:35:07.579309   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt: {Name:mk15b2fa98abbfdb48c91ec680acf9fb3d36790e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579497   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key ...
	I1002 20:35:07.579512   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key: {Name:mk9a0c724bfce05599dc583fc00cefa3d87ea4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579634   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:35:07.579676   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:35:07.579703   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:35:07.579724   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:35:07.579741   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:35:07.579761   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:35:07.579777   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:35:07.579793   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:35:07.579863   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:35:07.579919   65735 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:35:07.579934   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:35:07.579978   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:35:07.580017   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:35:07.580054   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:35:07.580109   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:07.580147   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.580168   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.580188   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.580777   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:35:07.598837   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:35:07.615911   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:35:07.632508   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:35:07.649448   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:35:07.666220   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:35:07.682564   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:35:07.698671   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:35:07.715367   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:35:07.733828   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:35:07.750440   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:35:07.767319   65735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
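	[editor note] Each certificate/key pair copied above can be sanity-checked by hand. A minimal sketch, assuming the paths shown in the log; this check is not something the runner itself performs here:

	    # Hypothetical manual check: a cert and its key match when their public keys agree.
	    openssl x509 -noout -pubkey -in /var/lib/minikube/certs/apiserver.crt | sha256sum
	    openssl pkey -in /var/lib/minikube/certs/apiserver.key -pubout | sha256sum
	    # The two digests must be identical.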
	I1002 20:35:07.779274   65735 ssh_runner.go:195] Run: openssl version
	I1002 20:35:07.785412   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:35:07.793537   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797120   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797185   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.830751   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:35:07.839226   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:35:07.847497   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851214   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851284   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.884464   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:35:07.892882   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:35:07.901283   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904919   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904989   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.938825   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
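	[editor note] The three openssl/ln sequences above follow the standard OpenSSL trust-store convention: each CA file is linked into /etc/ssl/certs and then symlinked again under its subject hash with a ".0" suffix (3ec20f2e.0, b5213941.0, 51391683.0 in this run). A condensed sketch of the same steps for one CA, paths taken from the log:

	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"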
	I1002 20:35:07.947236   65735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:35:07.950736   65735 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:35:07.950798   65735 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
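	[editor note] This StartCluster dump mirrors the profile's on-disk configuration. If needed, it can be inspected directly; the config.json filename below is an assumption based on minikube's usual profile layout, and the pretty-printing is my addition:

	    python3 -m json.tool \
	      /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json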
	I1002 20:35:07.950893   65735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:35:07.950946   65735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:35:07.977912   65735 cri.go:89] found id: ""
	I1002 20:35:07.977979   65735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:35:07.986292   65735 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:35:07.994921   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:35:07.994975   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:35:08.002478   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:35:08.002496   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:35:08.002543   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:35:08.009971   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:35:08.010024   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:35:08.017237   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:35:08.024502   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:35:08.024556   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:35:08.031907   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.039330   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:35:08.039379   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.046463   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:35:08.053807   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:35:08.053903   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
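	[editor note] The four grep-then-rm exchanges above reduce to one cleanup pass: keep a kubeconfig only if it already points at the control-plane endpoint, otherwise remove it so kubeadm regenerates it. A sketch of the equivalent; the loop form and -q flag are mine, the commands and paths are from the log:

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done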
	I1002 20:35:08.060624   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:35:08.119106   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:35:08.173560   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:39:12.973904   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:39:12.973990   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:39:12.976774   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:12.976818   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:12.976887   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:12.976934   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:12.976995   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:12.977058   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:12.977098   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:12.977140   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:12.977186   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:12.977225   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:12.977267   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:12.977305   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:12.977341   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:12.977406   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:12.977543   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:12.977704   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:12.977780   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:12.979758   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:12.979842   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:12.979923   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:12.980019   65735 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:39:12.980106   65735 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:39:12.980194   65735 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:39:12.980269   65735 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:39:12.980321   65735 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:39:12.980413   65735 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980466   65735 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:39:12.980568   65735 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980632   65735 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:39:12.980716   65735 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:39:12.980755   65735 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:39:12.980811   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:12.980857   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:12.980906   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:12.980951   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:12.981048   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:12.981126   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:12.981273   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:12.981350   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:12.983755   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:12.983842   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:12.983938   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:12.984036   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:12.984176   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:12.984257   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:12.984363   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:12.984488   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:12.984545   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:12.984753   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:12.984854   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:12.984903   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.922515ms
	I1002 20:39:12.984979   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:12.985054   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:12.985161   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:12.985238   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:39:12.985311   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	I1002 20:39:12.985378   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	I1002 20:39:12.985440   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	I1002 20:39:12.985446   65735 kubeadm.go:318] 
	I1002 20:39:12.985522   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:39:12.985593   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:39:12.985693   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:39:12.985824   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:39:12.985912   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:39:12.986021   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:39:12.986035   65735 kubeadm.go:318] 
	W1002 20:39:12.986169   65735 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.922515ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
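	
	[editor note] All three components timed out on the endpoints kubeadm polls. Before the reset below, the same probes can be issued by hand from inside the node; the curl invocations are my sketch, the URLs are exactly those in the error, and the crictl line is the one kubeadm suggests:

	    # -k skips TLS verification; these are the health URLs kubeadm checked.
	    curl -k https://192.168.49.2:8443/livez       # kube-apiserver
	    curl -k https://127.0.0.1:10257/healthz       # kube-controller-manager
	    curl -k https://127.0.0.1:10259/livez         # kube-scheduler
	    # If all refuse the connection, check whether the static pods ever started:
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause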
	
	I1002 20:39:12.986234   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:39:15.725293   65735 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.739035287s)
	I1002 20:39:15.725355   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:39:15.737845   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:39:15.737903   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:39:15.745492   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:39:15.745511   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:39:15.745550   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:39:15.752939   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:39:15.752992   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:39:15.759738   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:39:15.767128   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:39:15.767188   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:39:15.774064   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.781262   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:39:15.781310   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.788069   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:39:15.794962   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:39:15.795005   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:39:15.801555   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:39:15.838178   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:15.838265   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:15.857468   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:15.857554   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:15.857595   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:15.857662   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:15.857732   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:15.857790   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:15.857850   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:15.857910   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:15.857968   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:15.858025   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:15.858074   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:15.913423   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:15.913519   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:15.913695   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:15.919381   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:15.923412   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:15.923499   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:15.923576   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:15.923699   65735 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:39:15.923782   65735 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:39:15.923894   65735 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:39:15.923992   65735 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:39:15.924083   65735 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:39:15.924181   65735 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:39:15.924290   65735 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:39:15.924413   65735 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:39:15.924467   65735 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:39:15.924516   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:15.972082   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:16.453776   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:16.604369   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:16.663878   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:16.897315   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:16.897840   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:16.900020   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:16.904460   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:16.904580   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:16.904708   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:16.904777   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:16.917381   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:16.917507   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:16.923733   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:16.923961   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:16.924014   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:17.026795   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:17.026994   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:18.027761   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001153954s
	I1002 20:39:18.030723   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:18.030874   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:18.030983   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:18.031132   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:43:18.032268   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	I1002 20:43:18.032470   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	I1002 20:43:18.032708   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	I1002 20:43:18.032742   65735 kubeadm.go:318] 
	I1002 20:43:18.032978   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:43:18.033184   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:43:18.033380   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:43:18.033599   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:43:18.033817   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:43:18.034037   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:43:18.034052   65735 kubeadm.go:318] 
	I1002 20:43:18.036299   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:43:18.036439   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:43:18.037026   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 20:43:18.037095   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:43:18.037166   65735 kubeadm.go:402] duration metric: took 8m10.08637402s to StartCluster
	I1002 20:43:18.037208   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:43:18.037266   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:43:18.063637   65735 cri.go:89] found id: ""
	I1002 20:43:18.063696   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.063705   65735 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:43:18.063713   65735 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:43:18.063764   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:43:18.087963   65735 cri.go:89] found id: ""
	I1002 20:43:18.087984   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.087991   65735 logs.go:284] No container was found matching "etcd"
	I1002 20:43:18.087996   65735 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:43:18.088039   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:43:18.111877   65735 cri.go:89] found id: ""
	I1002 20:43:18.111898   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.111905   65735 logs.go:284] No container was found matching "coredns"
	I1002 20:43:18.111911   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:43:18.111958   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:43:18.134814   65735 cri.go:89] found id: ""
	I1002 20:43:18.134834   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.134841   65735 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:43:18.134847   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:43:18.134887   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:43:18.158528   65735 cri.go:89] found id: ""
	I1002 20:43:18.158551   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.158558   65735 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:43:18.158564   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:43:18.158607   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:43:18.182849   65735 cri.go:89] found id: ""
	I1002 20:43:18.182871   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.182878   65735 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:43:18.182883   65735 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:43:18.182926   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:43:18.207447   65735 cri.go:89] found id: ""
	I1002 20:43:18.207470   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.207481   65735 logs.go:284] No container was found matching "kindnet"
	I1002 20:43:18.207493   65735 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:43:18.207507   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:43:18.263385   65735 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:43:18.263409   65735 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:43:18.263421   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:43:18.324936   65735 logs.go:123] Gathering logs for container status ...
	I1002 20:43:18.324969   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:43:18.351509   65735 logs.go:123] Gathering logs for kubelet ...
	I1002 20:43:18.351536   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:43:18.417761   65735 logs.go:123] Gathering logs for dmesg ...
	I1002 20:43:18.417794   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
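	[editor note] The gathering pass above (describe nodes, CRI-O journal, container status, kubelet journal, dmesg) can be replayed manually inside the node; a condensed sketch using the same commands the runner issued:

	    sudo journalctl -u crio -n 400      # CRI-O runtime log
	    sudo journalctl -u kubelet -n 400   # kubelet log
	    sudo crictl ps -a                   # container status (all states)
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400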
	W1002 20:43:18.428567   65735 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:43:18.428641   65735 out.go:285] * 
	W1002 20:43:18.428716   65735 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.428735   65735 out.go:285] * 
	W1002 20:43:18.430415   65735 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:43:18.433894   65735 out.go:203] 
	W1002 20:43:18.435307   65735 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.435342   65735 out.go:285] * 
	I1002 20:43:18.437263   65735 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:43:09 ha-872795 crio[775]: time="2025-10-02T20:43:09.828406348Z" level=info msg="createCtr: deleting container bad4211dc5a289e9694e68fa4aaa894c3c0a5fb60d5c7f1d2cb73c694bf26223 from storage" id=e60848dc-72b8-4938-b886-ba5456a4a3d9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:43:09 ha-872795 crio[775]: time="2025-10-02T20:43:09.830776629Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-872795_kube-system_107e053e4ed538c93835a81754178211_0" id=830d35d6-3570-4418-8664-e41f53f0a498 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:43:09 ha-872795 crio[775]: time="2025-10-02T20:43:09.831116604Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=e60848dc-72b8-4938-b886-ba5456a4a3d9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:43:13 ha-872795 crio[775]: time="2025-10-02T20:43:13.801400794Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=5f5859a1-7200-4d9e-bd86-46729fa05af4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:43:13 ha-872795 crio[775]: time="2025-10-02T20:43:13.802203123Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=8cbb57aa-5b8f-4e9a-9e6f-bba957498c73 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:43:13 ha-872795 crio[775]: time="2025-10-02T20:43:13.803065814Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-872795/kube-scheduler" id=12e023e6-f427-4ab6-a2b2-ea5649ef3387 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:43:13 ha-872795 crio[775]: time="2025-10-02T20:43:13.803296737Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:43:13 ha-872795 crio[775]: time="2025-10-02T20:43:13.806623153Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:43:13 ha-872795 crio[775]: time="2025-10-02T20:43:13.807047696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:43:13 ha-872795 crio[775]: time="2025-10-02T20:43:13.822579581Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=12e023e6-f427-4ab6-a2b2-ea5649ef3387 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:43:13 ha-872795 crio[775]: time="2025-10-02T20:43:13.82398286Z" level=info msg="createCtr: deleting container ID 887cdbeeb42dcbda9a99778927aacd2c6e998f58f1b0d7ad77b541a8de5d1b31 from idIndex" id=12e023e6-f427-4ab6-a2b2-ea5649ef3387 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:43:13 ha-872795 crio[775]: time="2025-10-02T20:43:13.824013614Z" level=info msg="createCtr: removing container 887cdbeeb42dcbda9a99778927aacd2c6e998f58f1b0d7ad77b541a8de5d1b31" id=12e023e6-f427-4ab6-a2b2-ea5649ef3387 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:43:13 ha-872795 crio[775]: time="2025-10-02T20:43:13.824042531Z" level=info msg="createCtr: deleting container 887cdbeeb42dcbda9a99778927aacd2c6e998f58f1b0d7ad77b541a8de5d1b31 from storage" id=12e023e6-f427-4ab6-a2b2-ea5649ef3387 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:43:13 ha-872795 crio[775]: time="2025-10-02T20:43:13.826086983Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-872795_kube-system_9558d13dd1c45ecd0f3d491377941404_0" id=12e023e6-f427-4ab6-a2b2-ea5649ef3387 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:43:18 ha-872795 crio[775]: time="2025-10-02T20:43:18.802069297Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=0323f2f0-ea6f-479e-a390-98f443ffe0e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:43:18 ha-872795 crio[775]: time="2025-10-02T20:43:18.802991369Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=107b4d0b-91d1-42b2-ae89-8c9466456b1c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:43:18 ha-872795 crio[775]: time="2025-10-02T20:43:18.803858666Z" level=info msg="Creating container: kube-system/etcd-ha-872795/etcd" id=686e0d01-95a0-4855-8bbc-8ac5ba00b350 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:43:18 ha-872795 crio[775]: time="2025-10-02T20:43:18.804081524Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:43:18 ha-872795 crio[775]: time="2025-10-02T20:43:18.807574021Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:43:18 ha-872795 crio[775]: time="2025-10-02T20:43:18.808120352Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:43:18 ha-872795 crio[775]: time="2025-10-02T20:43:18.82411529Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=686e0d01-95a0-4855-8bbc-8ac5ba00b350 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:43:18 ha-872795 crio[775]: time="2025-10-02T20:43:18.825464449Z" level=info msg="createCtr: deleting container ID 518791b35755af55bd1b61d9f2057e1b9df9a0ddec86e233b0ad9c4fc8ad9f01 from idIndex" id=686e0d01-95a0-4855-8bbc-8ac5ba00b350 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:43:18 ha-872795 crio[775]: time="2025-10-02T20:43:18.825496595Z" level=info msg="createCtr: removing container 518791b35755af55bd1b61d9f2057e1b9df9a0ddec86e233b0ad9c4fc8ad9f01" id=686e0d01-95a0-4855-8bbc-8ac5ba00b350 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:43:18 ha-872795 crio[775]: time="2025-10-02T20:43:18.825527693Z" level=info msg="createCtr: deleting container 518791b35755af55bd1b61d9f2057e1b9df9a0ddec86e233b0ad9c4fc8ad9f01 from storage" id=686e0d01-95a0-4855-8bbc-8ac5ba00b350 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:43:18 ha-872795 crio[775]: time="2025-10-02T20:43:18.827883174Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-872795_kube-system_d62260a65d00c99f27090ce6484101a9_0" id=686e0d01-95a0-4855-8bbc-8ac5ba00b350 name=/runtime.v1.RuntimeService/CreateContainer
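Every container create in the CRI-O log above fails with the same "cannot open sd-bus: No such file or directory" error, which typically means the runtime is trying to talk to systemd (for example, via a systemd cgroup manager) but no bus is reachable inside the node. A sketch of two checks one might run from the host; the /etc/crio config location and the cgroup_manager key are assumptions based on stock CRI-O packaging, not taken from this report:

	# is CRI-O configured for the systemd cgroup manager? (assumed config location)
	minikube ssh -p ha-872795 -- sudo grep -Rn cgroup_manager /etc/crio/
	# is systemd actually running as PID 1 inside the node?
	minikube ssh -p ha-872795 -- ps -p 1 -o comm=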
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:43:19.322326    2706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:19.323362    2706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:19.324215    2706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:19.325754    2706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:19.326116    2706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
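The connection-refused errors above only say the apiserver port is closed; probing the livez endpoint from inside the node (the same endpoint the control-plane-check polled) distinguishes an apiserver that never started from one that is up but unhealthy. A sketch, assuming curl is available in the node image; here the empty container status table above already points to "never started":

	minikube ssh -p ha-872795 -- curl -sk https://localhost:8443/livez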
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:43:19 up  1:25,  0 user,  load average: 0.20, 0.23, 0.15
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:43:09 ha-872795 kubelet[1957]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-872795_kube-system(bb93bd54c951d044e2ddbaf0dd48a41c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:43:09 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:43:09 ha-872795 kubelet[1957]: E1002 20:43:09.832532    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-872795" podUID="bb93bd54c951d044e2ddbaf0dd48a41c"
	Oct 02 20:43:10 ha-872795 kubelet[1957]: E1002 20:43:10.578795    1957 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 02 20:43:13 ha-872795 kubelet[1957]: E1002 20:43:13.800996    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:43:13 ha-872795 kubelet[1957]: E1002 20:43:13.826345    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:43:13 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:43:13 ha-872795 kubelet[1957]:  > podSandboxID="d10793148ad7216a0ef1666e33973783c7ba51256a58959b454c6b69c2bbe01a"
	Oct 02 20:43:13 ha-872795 kubelet[1957]: E1002 20:43:13.826452    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:43:13 ha-872795 kubelet[1957]:         container kube-scheduler start failed in pod kube-scheduler-ha-872795_kube-system(9558d13dd1c45ecd0f3d491377941404): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:43:13 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:43:13 ha-872795 kubelet[1957]: E1002 20:43:13.826491    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-872795" podUID="9558d13dd1c45ecd0f3d491377941404"
	Oct 02 20:43:14 ha-872795 kubelet[1957]: E1002 20:43:14.423136    1957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:43:14 ha-872795 kubelet[1957]: I1002 20:43:14.574855    1957 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:43:14 ha-872795 kubelet[1957]: E1002 20:43:14.575218    1957 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:43:16 ha-872795 kubelet[1957]: E1002 20:43:16.129010    1957 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-872795.186ac7230cb0ec47  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-872795,UID:ha-872795,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-872795 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-872795,},FirstTimestamp:2025-10-02 20:39:17.792304199 +0000 UTC m=+0.765266556,LastTimestamp:2025-10-02 20:39:17.792304199 +0000 UTC m=+0.765266556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-872795,}"
	Oct 02 20:43:17 ha-872795 kubelet[1957]: E1002 20:43:17.811822    1957 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-872795\" not found"
	Oct 02 20:43:18 ha-872795 kubelet[1957]: E1002 20:43:18.801525    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:43:18 ha-872795 kubelet[1957]: E1002 20:43:18.828179    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:43:18 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:43:18 ha-872795 kubelet[1957]:  > podSandboxID="a3772fd373a48cf1fd0c1e339ac227ef43a36813ee71e966f4b33f85ee2aa32d"
	Oct 02 20:43:18 ha-872795 kubelet[1957]: E1002 20:43:18.828271    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:43:18 ha-872795 kubelet[1957]:         container etcd start failed in pod etcd-ha-872795_kube-system(d62260a65d00c99f27090ce6484101a9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:43:18 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:43:18 ha-872795 kubelet[1957]: E1002 20:43:18.828300    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-872795" podUID="d62260a65d00c99f27090ce6484101a9"
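The control-plane containers all die with the identical CreateContainerError, so the kubelet journal inside the node is the quickest single view of the pattern; a sketch, assuming the kic image runs kubelet as a systemd unit named kubelet:

	minikube ssh -p ha-872795 -- sudo journalctl -u kubelet --no-pager | grep -m 5 CreateContainerError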
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 6 (280.563412ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 20:43:19.677343   71206 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (502.13s)
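The harness polls one status field at a time with a Go template; the same mechanism can print several fields in a single call, which is convenient when reproducing this by hand. A sketch using the fields the report already queries plus Kubeconfig, which minikube status also exposes:

	out/minikube-linux-amd64 status -p ha-872795 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'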

                                                
                                    
TestMultiControlPlane/serial/DeployApp (98.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (87.248082ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-872795" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 kubectl -- rollout status deployment/busybox: exit status 1 (86.040242ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-872795"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (84.525739ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-872795"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 20:43:19.949339   12851 retry.go:31] will retry after 1.013366609s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (85.080105ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-872795"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 20:43:21.048742   12851 retry.go:31] will retry after 1.361310244s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (84.533884ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-872795"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 20:43:22.495920   12851 retry.go:31] will retry after 1.253461732s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (84.522143ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-872795"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 20:43:23.835023   12851 retry.go:31] will retry after 4.923471316s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (85.744373ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-872795"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 20:43:28.848958   12851 retry.go:31] will retry after 6.490528651s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (88.191396ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-872795"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 20:43:35.428792   12851 retry.go:31] will retry after 9.307177288s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (87.328317ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-872795"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 20:43:44.823688   12851 retry.go:31] will retry after 13.38833048s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (86.069606ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-872795"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 20:43:58.301747   12851 retry.go:31] will retry after 21.917716538s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (86.659468ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-872795"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 20:44:20.311806   12851 retry.go:31] will retry after 36.432104566s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (88.2484ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-872795"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (85.058374ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-872795"

                                                
                                                
** /stderr **
ha_test.go:165: failed to get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 kubectl -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 kubectl -- exec  -- nslookup kubernetes.io: exit status 1 (84.524912ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-872795"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 kubectl -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 kubectl -- exec  -- nslookup kubernetes.default: exit status 1 (87.249836ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-872795"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (86.0627ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-872795"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
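Every kubectl call in this test fails with no server found for cluster "ha-872795", which matches the kubeconfig-endpoint error in the post-mortem below: the profile was never written into the harness kubeconfig. A minimal sketch to confirm that directly, reusing the kubeconfig path printed later in this section:

	KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig kubectl config get-contexts
	# or just the cluster names registered in that file
	KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig kubectl config view -o jsonpath='{.clusters[*].name}'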
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 66301,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:35:02.679281779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f7265370a58453c840a8192d5da4a6feb4bbbb948c2dbabc9298cbd8189ca6f",
	            "SandboxKey": "/var/run/docker/netns/6f7265370a58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:33:39:9e:e0:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "632391284aa44c2eb0bec81b03555e3fed7084145763827002664c08b386a588",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
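The full docker inspect dump above reduces to the two facts the post-mortem needs, container state and the assigned IP; a sketch with a Go template (the network name equals the profile name, as the dump shows):

	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "ha-872795").IPAddress}}' ha-872795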
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 6 (277.553982ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 20:44:57.461022   72182 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-753218 image ls --format short --alsologtostderr                                                     │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ ssh     │ functional-753218 ssh pgrep buildkitd                                                                           │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ image   │ functional-753218 image ls --format yaml --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image build -t localhost/my-image:functional-753218 testdata/build --alsologtostderr          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format json --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format table --alsologtostderr                                                     │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls                                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ delete  │ -p functional-753218                                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ start   │ ha-872795 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- rollout status deployment/busybox                                                          │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:34:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:34:57.603077   65735 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:34:57.603305   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603314   65735 out.go:374] Setting ErrFile to fd 2...
	I1002 20:34:57.603317   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603506   65735 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:34:57.604047   65735 out.go:368] Setting JSON to false
	I1002 20:34:57.604900   65735 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4647,"bootTime":1759432651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:34:57.604978   65735 start.go:140] virtualization: kvm guest
	I1002 20:34:57.606757   65735 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:34:57.608100   65735 notify.go:221] Checking for updates...
	I1002 20:34:57.608137   65735 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:34:57.609378   65735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:34:57.610568   65735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:34:57.611851   65735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:34:57.613234   65735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:34:57.614447   65735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:34:57.615922   65735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:34:57.640606   65735 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:34:57.640699   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.694592   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.684173929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.694720   65735 docker.go:319] overlay module found
	I1002 20:34:57.696786   65735 out.go:179] * Using the docker driver based on user configuration
	I1002 20:34:57.698015   65735 start.go:306] selected driver: docker
	I1002 20:34:57.698032   65735 start.go:936] validating driver "docker" against <nil>
	I1002 20:34:57.698041   65735 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:34:57.698548   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.751429   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.741121046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.751598   65735 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:34:57.751837   65735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:34:57.753668   65735 out.go:179] * Using Docker driver with root privileges
	I1002 20:34:57.755151   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:34:57.755225   65735 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 20:34:57.755237   65735 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:34:57.755322   65735 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:34:57.756619   65735 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:34:57.757986   65735 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:34:57.759335   65735 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:34:57.760371   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:57.760415   65735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:34:57.760432   65735 cache.go:59] Caching tarball of preloaded images
	I1002 20:34:57.760405   65735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:34:57.760507   65735 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:34:57.760515   65735 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:34:57.760879   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:34:57.760904   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json: {Name:mk498e6817e1668b9a52683e21d0fee45dc5b922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:34:57.779767   65735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:34:57.779785   65735 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:34:57.779798   65735 cache.go:233] Successfully downloaded all kic artifacts
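
The image.go lines above are the cache probe: the kicbase pull is skipped because the image already sits in the local daemon. A minimal Go sketch of that presence check (the helper name is illustrative, not minikube's API); `docker image inspect` exits non-zero when the image is absent:

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the local docker daemon already has the image,
// mirroring the "Found ... in local docker daemon, skipping pull" decision above.
func imageInDaemon(ref string) bool {
	// `docker image inspect` exits non-zero if the image is not present locally.
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643"
	if imageInDaemon(ref) {
		fmt.Println("exists in daemon, skipping load")
	} else {
		fmt.Println("image missing, would pull")
	}
}
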
	I1002 20:34:57.779817   65735 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:34:57.779902   65735 start.go:365] duration metric: took 70.896µs to acquireMachinesLock for "ha-872795"
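
acquireMachinesLock above logs its retry spec, {Delay:500ms Timeout:10m0s}: poll for the lock, sleep 500ms between attempts, give up after ten minutes. A sketch of that shape, assuming a plain O_EXCL lock file rather than minikube's actual lock implementation:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file every delay until timeout expires,
// the same retry shape as the {Delay:500ms Timeout:10m0s} spec logged above.
// The O_EXCL file is an illustrative stand-in for minikube's real lock.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("acquired machines lock")
}
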
	I1002 20:34:57.779922   65735 start.go:94] Provisioning new machine with config: &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:34:57.779985   65735 start.go:126] createHost starting for "" (driver="docker")
	I1002 20:34:57.782551   65735 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 20:34:57.782770   65735 start.go:160] libmachine.API.Create for "ha-872795" (driver="docker")
	I1002 20:34:57.782797   65735 client.go:168] LocalClient.Create starting
	I1002 20:34:57.782848   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem
	I1002 20:34:57.782884   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782899   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.782957   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem
	I1002 20:34:57.782975   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782985   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.783261   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:34:57.799786   65735 cli_runner.go:211] docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:34:57.799838   65735 network_create.go:284] running [docker network inspect ha-872795] to gather additional debugging logs...
	I1002 20:34:57.799850   65735 cli_runner.go:164] Run: docker network inspect ha-872795
	W1002 20:34:57.816003   65735 cli_runner.go:211] docker network inspect ha-872795 returned with exit code 1
	I1002 20:34:57.816028   65735 network_create.go:287] error running [docker network inspect ha-872795]: docker network inspect ha-872795: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-872795 not found
	I1002 20:34:57.816042   65735 network_create.go:289] output of [docker network inspect ha-872795]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-872795 not found
	
	** /stderr **
	I1002 20:34:57.816123   65735 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:34:57.832837   65735 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d0e700}
	I1002 20:34:57.832869   65735 network_create.go:124] attempt to create docker network ha-872795 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:34:57.832917   65735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-872795 ha-872795
	I1002 20:34:57.888359   65735 network_create.go:108] docker network ha-872795 192.168.49.0/24 created
	I1002 20:34:57.888401   65735 kic.go:121] calculated static IP "192.168.49.2" for the "ha-872795" container
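
network_create.go above picks the first free private /24 (here 192.168.49.0/24), creates the bridge network, and kic.go then derives the node's static IP: .1 is the gateway, .2 the first client address. A Go sketch with the create flags copied from the logged command and an illustrative helper for the IP derivation:

package main

import (
	"fmt"
	"net"
	"os/exec"
)

// createNetwork reproduces the `docker network create` invocation logged above.
func createNetwork(name, subnet, gateway string) error {
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io="+name,
		name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("network create: %v: %s", err, out)
	}
	return nil
}

// firstClientIP returns the first usable address after the gateway, which is
// how 192.168.49.2 falls out of the 192.168.49.0/24 subnet chosen above.
func firstClientIP(cidr string) (string, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return "", err
	}
	ip := ipnet.IP.To4()
	ip[3] += 2 // .0 is the network, .1 the gateway, .2 the first container
	return ip.String(), nil
}

func main() {
	if err := createNetwork("ha-872795", "192.168.49.0/24", "192.168.49.1"); err != nil {
		fmt.Println(err)
	}
	ip, _ := firstClientIP("192.168.49.0/24")
	fmt.Println("static IP:", ip) // 192.168.49.2
}
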
	I1002 20:34:57.888473   65735 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:34:57.905091   65735 cli_runner.go:164] Run: docker volume create ha-872795 --label name.minikube.sigs.k8s.io=ha-872795 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:34:57.922367   65735 oci.go:103] Successfully created a docker volume ha-872795
	I1002 20:34:57.922439   65735 cli_runner.go:164] Run: docker run --rm --name ha-872795-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --entrypoint /usr/bin/test -v ha-872795:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:34:58.293551   65735 oci.go:107] Successfully prepared a docker volume ha-872795
	I1002 20:34:58.293606   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:58.293630   65735 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:34:58.293727   65735 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:35:02.570537   65735 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.276723115s)
	I1002 20:35:02.570574   65735 kic.go:203] duration metric: took 4.276941565s to extract preloaded images to volume ...
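
The ~4.3s extraction above runs a throwaway container from the kicbase image: the lz4 preload tarball is mounted read-only, the cluster's named volume at /extractDir, and tar unpacks one into the other. A Go sketch of that invocation (host path and image tag shortened for readability):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Shortened versions of the paths in the logged command.
	tarball := "/home/jenkins/.../preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643"

	// Mirror the logged `docker run --rm --entrypoint /usr/bin/tar ...` call:
	// preload mounted read-only, the named volume mounted at /extractDir.
	start := time.Now()
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "ha-872795:/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v: %s\n", err, out)
		return
	}
	fmt.Printf("took %s to extract preloaded images to volume\n", time.Since(start))
}
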
	W1002 20:35:02.570728   65735 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 20:35:02.570763   65735 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 20:35:02.570812   65735 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:35:02.622879   65735 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-872795 --name ha-872795 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-872795 --network ha-872795 --ip 192.168.49.2 --volume ha-872795:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:35:02.884170   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Running}}
	I1002 20:35:02.901900   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:02.919319   65735 cli_runner.go:164] Run: docker exec ha-872795 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:35:02.965612   65735 oci.go:144] the created container "ha-872795" has a running status.
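
The long `docker run` above publishes container ports 22, 2376, 5000, 8443 and 32443 on ephemeral 127.0.0.1 ports; the provisioning steps below recover the SSH endpoint (here 32783) by inspecting the container with the same --format template that appears throughout this log. A Go sketch of that lookup:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor resolves which 127.0.0.1 port docker assigned to a container
// port, matching the `docker container inspect -f ...HostPort...` calls below.
func hostPortFor(container, port string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPortFor("ha-872795", "22/tcp")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + p) // e.g. 127.0.0.1:32783
}
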
	I1002 20:35:02.965668   65735 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa...
	I1002 20:35:03.839142   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 20:35:03.839192   65735 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:35:03.863522   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.880799   65735 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:35:03.880817   65735 kic_runner.go:114] Args: [docker exec --privileged ha-872795 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:35:03.926950   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.943420   65735 machine.go:93] provisionDockerMachine start ...
	I1002 20:35:03.943502   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:03.960150   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:03.960365   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:03.960377   65735 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:35:04.102560   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.102595   65735 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:35:04.102672   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.119770   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.119958   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.119969   65735 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:35:04.269954   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.270024   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.289297   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.289532   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.289560   65735 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:35:04.431294   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
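
Everything from provisionDockerMachine onward runs over SSH to 127.0.0.1:32783 as user docker with the freshly generated id_rsa. A minimal runner sketch using golang.org/x/crypto/ssh; this is not minikube's sshutil, the key path is abbreviated, and host-key verification is skipped, which is only acceptable for a localhost test container:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH executes one command on the node, the way the hostname and
// /etc/hosts steps above were driven.
func runOverSSH(addr, keyPath, command string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // localhost test container only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:32783", "/home/jenkins/.../machines/ha-872795/id_rsa", "hostname")
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}
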
	I1002 20:35:04.431331   65735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:35:04.431373   65735 ubuntu.go:190] setting up certificates
	I1002 20:35:04.431385   65735 provision.go:84] configureAuth start
	I1002 20:35:04.431438   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:04.448348   65735 provision.go:143] copyHostCerts
	I1002 20:35:04.448379   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448406   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:35:04.448415   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448488   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:35:04.448574   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448595   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:35:04.448601   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448642   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:35:04.448726   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448747   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:35:04.448752   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448791   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:35:04.448862   65735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
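
provision.go:117 above mints a server certificate signed by the minikube CA with SANs [127.0.0.1 192.168.49.2 ha-872795 localhost minikube]. A condensed crypto/x509 sketch of that step; the throwaway in-memory CA, the 2048-bit key size, and the 26280h validity (taken from CertExpiration in the config above) stand in for the real ca.pem/ca-key.pem flow:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// serverCert signs a server certificate for the given SANs with the CA, the
// same shape as the "generating server cert ... san=[...]" step above. IP
// literals go into IPAddresses, everything else into DNSNames.
func serverCert(ca *x509.Certificate, caKey *rsa.PrivateKey, sans []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-872795"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	// Throwaway self-signed CA in place of the on-disk minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	pemCert, err := serverCert(ca, caKey, []string{"127.0.0.1", "192.168.49.2", "ha-872795", "localhost", "minikube"})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("generated server cert, %d PEM bytes\n", len(pemCert))
}
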
	I1002 20:35:04.988583   65735 provision.go:177] copyRemoteCerts
	I1002 20:35:04.988660   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:35:04.988705   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.006318   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.107594   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:35:05.107664   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:35:05.126297   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:35:05.126364   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 20:35:05.143480   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:35:05.143534   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:35:05.159876   65735 provision.go:87] duration metric: took 728.473665ms to configureAuth
	I1002 20:35:05.159898   65735 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:35:05.160046   65735 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:35:05.160126   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.177170   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:05.177376   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:05.177394   65735 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:35:05.428320   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:35:05.428345   65735 machine.go:96] duration metric: took 1.484903501s to provisionDockerMachine
	I1002 20:35:05.428356   65735 client.go:171] duration metric: took 7.645553848s to LocalClient.Create
	I1002 20:35:05.428375   65735 start.go:168] duration metric: took 7.645605965s to libmachine.API.Create "ha-872795"
	I1002 20:35:05.428384   65735 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:35:05.428397   65735 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:35:05.428457   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:35:05.428518   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.445519   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.548684   65735 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:35:05.552289   65735 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:35:05.552315   65735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:35:05.552328   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:35:05.552389   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:35:05.552506   65735 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:35:05.552523   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:35:05.552690   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:35:05.559922   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:05.579393   65735 start.go:297] duration metric: took 150.9926ms for postStartSetup
	I1002 20:35:05.579737   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.596296   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:35:05.596542   65735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:35:05.596580   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.613097   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.710446   65735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:35:05.714631   65735 start.go:129] duration metric: took 7.934634425s to createHost
	I1002 20:35:05.714669   65735 start.go:84] releasing machines lock for "ha-872795", held for 7.934754949s
	I1002 20:35:05.714730   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.731548   65735 ssh_runner.go:195] Run: cat /version.json
	I1002 20:35:05.731603   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.731636   65735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:35:05.731714   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.749775   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.750794   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.899878   65735 ssh_runner.go:195] Run: systemctl --version
	I1002 20:35:05.906012   65735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:35:05.939030   65735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:35:05.945162   65735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:35:05.945228   65735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:35:05.970822   65735 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 20:35:05.970843   65735 start.go:496] detecting cgroup driver to use...
	I1002 20:35:05.970881   65735 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:35:05.970920   65735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:35:05.986282   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:35:05.998266   65735 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:35:05.998312   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:35:06.014196   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:35:06.031061   65735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:35:06.110898   65735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:35:06.195426   65735 docker.go:234] disabling docker service ...
	I1002 20:35:06.195481   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:35:06.213240   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:35:06.225772   65735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:35:06.305091   65735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:35:06.379591   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:35:06.391746   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:35:06.405291   65735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:35:06.405351   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.415561   65735 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:35:06.415640   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.425296   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.433995   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.442516   65735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:35:06.450342   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.458541   65735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.471597   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.479764   65735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:35:06.486745   65735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:35:06.493925   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:06.570695   65735 ssh_runner.go:195] Run: sudo systemctl restart crio
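
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place before restarting CRI-O. Reconstructed from those sed expressions (section placement assumed, not captured from the machine), the resulting drop-in is roughly:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
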
	I1002 20:35:06.674347   65735 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:35:06.674398   65735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:35:06.678210   65735 start.go:564] Will wait 60s for crictl version
	I1002 20:35:06.678265   65735 ssh_runner.go:195] Run: which crictl
	I1002 20:35:06.681670   65735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:35:06.705846   65735 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:35:06.705915   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.732749   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.761896   65735 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:35:06.763235   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:35:06.779789   65735 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:35:06.783762   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.793731   65735 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:35:06.793825   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:35:06.793893   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.823605   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.823628   65735 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:35:06.823701   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.848195   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.848217   65735 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:35:06.848224   65735 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:35:06.848297   65735 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:35:06.848363   65735 ssh_runner.go:195] Run: crio config
	I1002 20:35:06.893288   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:35:06.893308   65735 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:35:06.893324   65735 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:35:06.893344   65735 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:35:06.893447   65735 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
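
kubeadm.go:189 gathers the options struct and kubeadm.go:195 renders it into the three-document config above. A toy text/template rendering of just the InitConfiguration head shows the flow; the struct and template names here are illustrative, not minikube's:

package main

import (
	"os"
	"text/template"
)

// opts mirrors a slice of the kubeadm options logged above.
type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	CRISocket        string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.49.2",
		APIServerPort:    8443,
		NodeName:         "ha-872795",
		CRISocket:        "/var/run/crio/crio.sock",
	})
}
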
	
	I1002 20:35:06.893469   65735 kube-vip.go:115] generating kube-vip config ...
	I1002 20:35:06.893510   65735 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 20:35:06.905349   65735 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:35:06.905448   65735 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
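
Because `lsmod | grep ip_vs` returned nothing above, kube-vip.go skipped IPVS-based control-plane load-balancing and fell back to plain ARP mode: the generated manifest sets vip_arp=true and announces the VIP 192.168.49.254 on eth0. A Go sketch of that capability probe (the decision logic is simplified):

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable reports whether the ip_vs kernel modules are loaded, the same
// probe as the `sudo sh -c "lsmod | grep ip_vs"` run logged above.
func ipvsAvailable() bool {
	// grep exits 1 when nothing matches, which Run() surfaces as an error.
	return exec.Command("sh", "-c", "lsmod | grep ip_vs").Run() == nil
}

func main() {
	if ipvsAvailable() {
		fmt.Println("enabling control-plane load-balancing (IPVS)")
	} else {
		fmt.Println("giving up enabling control-plane load-balancing; using ARP mode for the VIP")
	}
}
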
	I1002 20:35:06.905495   65735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:35:06.913067   65735 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:35:06.913130   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 20:35:06.920364   65735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:35:06.932249   65735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:35:06.947038   65735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:35:06.959316   65735 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 20:35:06.973553   65735 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 20:35:06.977144   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.987569   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:07.065543   65735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:35:07.088051   65735 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:35:07.088073   65735 certs.go:195] generating shared ca certs ...
	I1002 20:35:07.088093   65735 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.088253   65735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:35:07.088318   65735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:35:07.088333   65735 certs.go:257] generating profile certs ...
	I1002 20:35:07.088394   65735 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:35:07.088419   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt with IP's: []
	I1002 20:35:07.271177   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt ...
	I1002 20:35:07.271216   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt: {Name:mkc9958351da3092e35cf2a0dff49f20b7dda9b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271433   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key ...
	I1002 20:35:07.271450   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key: {Name:mk9049d9b976553e3e7ceaf1eae8281114f2b93c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271566   65735 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2
	I1002 20:35:07.271586   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 20:35:07.440257   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 ...
	I1002 20:35:07.440291   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2: {Name:mk26eb0eaef560c0eb32f96e5b3e81634dfe2a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440486   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 ...
	I1002 20:35:07.440505   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2: {Name:mkc88fc8c0ccf5f6abaf6c3777b5135914058a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440621   65735 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt
	I1002 20:35:07.440775   65735 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key
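
Note the SAN list used for apiserver.crt above: 10.96.0.1 is the first address of the ServiceCIDR 10.96.0.0/12 (the in-cluster `kubernetes` Service ClusterIP) and 192.168.49.254 is the kube-vip HA VIP, alongside the node IP 192.168.49.2. A small sketch of the CIDR-to-first-IP derivation:

package main

import (
	"fmt"
	"net"
)

// firstServiceIP returns the ClusterIP of the `kubernetes` Service: the first
// usable address in the service CIDR (network address + 1).
func firstServiceIP(cidr string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	ip[3]++ // 10.96.0.0 -> 10.96.0.1
	return ip, nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(ip) // 10.96.0.1
}
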
	I1002 20:35:07.440868   65735 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:35:07.440913   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt with IP's: []
	I1002 20:35:07.579277   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt ...
	I1002 20:35:07.579309   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt: {Name:mk15b2fa98abbfdb48c91ec680acf9fb3d36790e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579497   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key ...
	I1002 20:35:07.579512   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key: {Name:mk9a0c724bfce05599dc583fc00cefa3d87ea4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
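Each profile cert above is a CA-signed key pair generated in-process by crypto.go. A rough openssl equivalent for the apiserver pair, using the SANs from this run (the file names and 365-day validity are illustrative, not taken from minikube):

    # Hypothetical openssl rendition of the in-process generation; ca.crt/ca.key
    # stand in for the profile CA under .minikube/.
    openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
      -keyout apiserver.key -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.49.2,IP:192.168.49.254")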
	I1002 20:35:07.579634   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:35:07.579676   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:35:07.579703   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:35:07.579724   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:35:07.579741   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:35:07.579761   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:35:07.579777   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:35:07.579793   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:35:07.579863   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:35:07.579919   65735 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:35:07.579934   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:35:07.579978   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:35:07.580017   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:35:07.580054   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:35:07.580109   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:07.580147   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.580168   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.580188   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.580777   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:35:07.598837   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:35:07.615911   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:35:07.632508   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:35:07.649448   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:35:07.666220   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:35:07.682564   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:35:07.698671   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:35:07.715367   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:35:07.733828   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:35:07.750440   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:35:07.767319   65735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:35:07.779274   65735 ssh_runner.go:195] Run: openssl version
	I1002 20:35:07.785412   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:35:07.793537   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797120   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797185   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.830751   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:35:07.839226   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:35:07.847497   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851214   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851284   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.884464   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:35:07.892882   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:35:07.901283   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904919   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904989   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.938825   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
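The stanza above is minikube registering each CA with OpenSSL's hashed certificate directory: link the PEM into /etc/ssl/certs, hash its subject, then add the <hash>.0 lookup link. The same technique in isolation (my-ca.pem is a placeholder, not a file from this log):

    # Link the CA under its own name, then under its OpenSSL subject hash.
    sudo ln -fs /usr/share/ca-certificates/my-ca.pem /etc/ssl/certs/my-ca.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/my-ca.pem)
    sudo test -L "/etc/ssl/certs/${hash}.0" || \
      sudo ln -fs /etc/ssl/certs/my-ca.pem "/etc/ssl/certs/${hash}.0"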
	I1002 20:35:07.947236   65735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:35:07.950736   65735 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:35:07.950798   65735 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I1002 20:35:07.950893   65735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:35:07.950946   65735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:35:07.977912   65735 cri.go:89] found id: ""
	I1002 20:35:07.977979   65735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:35:07.986292   65735 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:35:07.994921   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:35:07.994975   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:35:08.002478   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:35:08.002496   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:35:08.002543   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:35:08.009971   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:35:08.010024   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:35:08.017237   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:35:08.024502   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:35:08.024556   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:35:08.031907   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.039330   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:35:08.039379   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.046463   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:35:08.053807   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:35:08.053903   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
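The grep/rm pairs above are minikube's stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted before kubeadm init runs. Condensed into one loop, with the same endpoint and files as this run:

    # Drop kubeconfigs that don't point at the expected API endpoint.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done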
	I1002 20:35:08.060624   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:35:08.119106   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:35:08.173560   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:39:12.973904   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:39:12.973990   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:39:12.976774   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:12.976818   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:12.976887   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:12.976934   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:12.976995   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:12.977058   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:12.977098   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:12.977140   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:12.977186   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:12.977225   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:12.977267   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:12.977305   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:12.977341   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:12.977406   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:12.977543   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:12.977704   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:12.977780   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:12.979758   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:12.979842   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:12.979923   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:12.980019   65735 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:39:12.980106   65735 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:39:12.980194   65735 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:39:12.980269   65735 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:39:12.980321   65735 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:39:12.980413   65735 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980466   65735 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:39:12.980568   65735 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980632   65735 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:39:12.980716   65735 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:39:12.980755   65735 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:39:12.980811   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:12.980857   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:12.980906   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:12.980951   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:12.981048   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:12.981126   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:12.981273   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:12.981350   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:12.983755   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:12.983842   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:12.983938   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:12.984036   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:12.984176   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:12.984257   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:12.984363   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:12.984488   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:12.984545   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:12.984753   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:12.984854   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:12.984903   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.922515ms
	I1002 20:39:12.984979   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:12.985054   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:12.985161   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:12.985238   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:39:12.985311   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	I1002 20:39:12.985378   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	I1002 20:39:12.985440   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	I1002 20:39:12.985446   65735 kubeadm.go:318] 
	I1002 20:39:12.985522   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:39:12.985593   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:39:12.985693   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:39:12.985824   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:39:12.985912   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:39:12.986021   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:39:12.986035   65735 kubeadm.go:318] 
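With the docker driver, kubeadm's crictl advice has to run inside the node container, which is named after the profile (ha-872795 here, assuming default naming); CONTAINERID stays a placeholder:

    # List kube-* containers inside the minikube node, then inspect the failing one.
    docker exec ha-872795 /bin/bash -c \
      "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
    docker exec ha-872795 sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID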
	W1002 20:39:12.986169   65735 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.922515ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
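The three endpoints in the error are plain HTTPS health probes, so they can be re-checked by hand while kubeadm waits, assuming curl is available in the node image:

    # Probe the same control-plane health endpoints kubeadm was polling.
    docker exec ha-872795 curl -ks https://192.168.49.2:8443/livez      # kube-apiserver
    docker exec ha-872795 curl -ks https://127.0.0.1:10257/healthz     # kube-controller-manager
    docker exec ha-872795 curl -ks https://127.0.0.1:10259/livez       # kube-scheduler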
	
	I1002 20:39:12.986234   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:39:15.725293   65735 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.739035287s)
	I1002 20:39:15.725355   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:39:15.737845   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:39:15.737903   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:39:15.745492   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:39:15.745511   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:39:15.745550   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:39:15.752939   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:39:15.752992   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:39:15.759738   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:39:15.767128   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:39:15.767188   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:39:15.774064   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.781262   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:39:15.781310   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.788069   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:39:15.794962   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:39:15.795005   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:39:15.801555   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:39:15.838178   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:15.838265   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:15.857468   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:15.857554   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:15.857595   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:15.857662   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:15.857732   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:15.857790   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:15.857850   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:15.857910   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:15.857968   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:15.858025   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:15.858074   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:15.913423   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:15.913519   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:15.913695   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:15.919381   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:15.923412   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:15.923499   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:15.923576   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:15.923699   65735 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:39:15.923782   65735 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:39:15.923894   65735 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:39:15.923992   65735 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:39:15.924083   65735 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:39:15.924181   65735 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:39:15.924290   65735 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:39:15.924413   65735 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:39:15.924467   65735 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:39:15.924516   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:15.972082   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:16.453776   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:16.604369   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:16.663878   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:16.897315   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:16.897840   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:16.900020   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:16.904460   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:16.904580   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:16.904708   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:16.904777   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:16.917381   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:16.917507   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:16.923733   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:16.923961   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:16.924014   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:17.026795   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:17.026994   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:18.027761   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001153954s
	I1002 20:39:18.030723   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:18.030874   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:18.030983   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:18.031132   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:43:18.032268   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	I1002 20:43:18.032470   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	I1002 20:43:18.032708   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	I1002 20:43:18.032742   65735 kubeadm.go:318] 
	I1002 20:43:18.032978   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:43:18.033184   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:43:18.033380   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:43:18.033599   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:43:18.033817   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:43:18.034037   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:43:18.034052   65735 kubeadm.go:318] 
	I1002 20:43:18.036299   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:43:18.036439   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:43:18.037026   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 20:43:18.037095   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:43:18.037166   65735 kubeadm.go:402] duration metric: took 8m10.08637402s to StartCluster
	I1002 20:43:18.037208   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:43:18.037266   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:43:18.063637   65735 cri.go:89] found id: ""
	I1002 20:43:18.063696   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.063705   65735 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:43:18.063713   65735 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:43:18.063764   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:43:18.087963   65735 cri.go:89] found id: ""
	I1002 20:43:18.087984   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.087991   65735 logs.go:284] No container was found matching "etcd"
	I1002 20:43:18.087996   65735 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:43:18.088039   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:43:18.111877   65735 cri.go:89] found id: ""
	I1002 20:43:18.111898   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.111905   65735 logs.go:284] No container was found matching "coredns"
	I1002 20:43:18.111911   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:43:18.111958   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:43:18.134814   65735 cri.go:89] found id: ""
	I1002 20:43:18.134834   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.134841   65735 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:43:18.134847   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:43:18.134887   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:43:18.158528   65735 cri.go:89] found id: ""
	I1002 20:43:18.158551   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.158558   65735 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:43:18.158564   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:43:18.158607   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:43:18.182849   65735 cri.go:89] found id: ""
	I1002 20:43:18.182871   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.182878   65735 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:43:18.182883   65735 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:43:18.182926   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:43:18.207447   65735 cri.go:89] found id: ""
	I1002 20:43:18.207470   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.207481   65735 logs.go:284] No container was found matching "kindnet"
	I1002 20:43:18.207493   65735 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:43:18.207507   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:43:18.263385   65735 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:43:18.263409   65735 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:43:18.263421   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:43:18.324936   65735 logs.go:123] Gathering logs for container status ...
	I1002 20:43:18.324969   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:43:18.351509   65735 logs.go:123] Gathering logs for kubelet ...
	I1002 20:43:18.351536   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:43:18.417761   65735 logs.go:123] Gathering logs for dmesg ...
	I1002 20:43:18.417794   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
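The four log sources gathered above (kubelet, CRI-O, container status, dmesg) can also be pulled manually from the node for a deeper look, using the same commands minikube runs:

    # Reproduce minikube's post-mortem collection inside the node container.
    docker exec ha-872795 sudo journalctl -u kubelet -n 400
    docker exec ha-872795 sudo journalctl -u crio -n 400
    docker exec ha-872795 sudo crictl ps -a
    docker exec ha-872795 /bin/bash -c \
      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"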
	W1002 20:43:18.428567   65735 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:43:18.428641   65735 out.go:285] * 
	W1002 20:43:18.428716   65735 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.428735   65735 out.go:285] * 
	W1002 20:43:18.430415   65735 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:43:18.433894   65735 out.go:203] 
	W1002 20:43:18.435307   65735 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.435342   65735 out.go:285] * 
	I1002 20:43:18.437263   65735 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.807981244Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.808536306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.809808803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.810257214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.823811065Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=41ed485b-12d3-411c-967a-896b2c0fdab3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.825019605Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9f60a799-18bb-48b0-8c3f-888a44f48b88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.825202406Z" level=info msg="createCtr: deleting container ID dcd029b90758f65c5b06f8e6b1b8b4c1db17ace1d11582b4db6106dfd1034230 from idIndex" id=41ed485b-12d3-411c-967a-896b2c0fdab3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.825238957Z" level=info msg="createCtr: removing container dcd029b90758f65c5b06f8e6b1b8b4c1db17ace1d11582b4db6106dfd1034230" id=41ed485b-12d3-411c-967a-896b2c0fdab3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.825277088Z" level=info msg="createCtr: deleting container dcd029b90758f65c5b06f8e6b1b8b4c1db17ace1d11582b4db6106dfd1034230 from storage" id=41ed485b-12d3-411c-967a-896b2c0fdab3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.826300407Z" level=info msg="createCtr: deleting container ID 138a6dabf63007d1b4a2c0acc749aa56f5ab1e459e19ff534e44fab9b2d43fa0 from idIndex" id=9f60a799-18bb-48b0-8c3f-888a44f48b88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.826328555Z" level=info msg="createCtr: removing container 138a6dabf63007d1b4a2c0acc749aa56f5ab1e459e19ff534e44fab9b2d43fa0" id=9f60a799-18bb-48b0-8c3f-888a44f48b88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.826356719Z" level=info msg="createCtr: deleting container 138a6dabf63007d1b4a2c0acc749aa56f5ab1e459e19ff534e44fab9b2d43fa0 from storage" id=9f60a799-18bb-48b0-8c3f-888a44f48b88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.827919811Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=41ed485b-12d3-411c-967a-896b2c0fdab3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.830031075Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-872795_kube-system_d62260a65d00c99f27090ce6484101a9_0" id=9f60a799-18bb-48b0-8c3f-888a44f48b88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.801883479Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=57eb11f2-ebce-4ac7-9758-5513f4d13809 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.802810009Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=5e092a64-a9db-46ea-a6f8-75a15c4871a4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.803693911Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-872795/kube-apiserver" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.803901947Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.807281855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.8076945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.825644838Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.827042779Z" level=info msg="createCtr: deleting container ID 3e8152e866fa071f321cf012566678af0c3b90587478627e33a052a71fe906e3 from idIndex" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.827075909Z" level=info msg="createCtr: removing container 3e8152e866fa071f321cf012566678af0c3b90587478627e33a052a71fe906e3" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.827105892Z" level=info msg="createCtr: deleting container 3e8152e866fa071f321cf012566678af0c3b90587478627e33a052a71fe906e3 from storage" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.829276684Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-872795_kube-system_107e053e4ed538c93835a81754178211_0" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:44:58.017778    3044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:44:58.018298    3044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:44:58.020075    3044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:44:58.020521    3044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:44:58.022086    3044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:44:58 up  1:27,  0 user,  load average: 0.90, 0.48, 0.25
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:44:50 ha-872795 kubelet[1957]:  > podSandboxID="079790ff593aadbe100b150a42b87bda86092d1fcff86f8774566f658d455d0a"
	Oct 02 20:44:50 ha-872795 kubelet[1957]: E1002 20:44:50.828643    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:44:50 ha-872795 kubelet[1957]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-872795_kube-system(bb93bd54c951d044e2ddbaf0dd48a41c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:50 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:44:50 ha-872795 kubelet[1957]: E1002 20:44:50.828711    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-872795" podUID="bb93bd54c951d044e2ddbaf0dd48a41c"
	Oct 02 20:44:50 ha-872795 kubelet[1957]: E1002 20:44:50.830236    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:44:50 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:50 ha-872795 kubelet[1957]:  > podSandboxID="a3772fd373a48cf1fd0c1e339ac227ef43a36813ee71e966f4b33f85ee2aa32d"
	Oct 02 20:44:50 ha-872795 kubelet[1957]: E1002 20:44:50.830326    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:44:50 ha-872795 kubelet[1957]:         container etcd start failed in pod etcd-ha-872795_kube-system(d62260a65d00c99f27090ce6484101a9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:50 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:44:50 ha-872795 kubelet[1957]: E1002 20:44:50.830360    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-872795" podUID="d62260a65d00c99f27090ce6484101a9"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.437895    1957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: I1002 20:44:52.603095    1957 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.603495    1957 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.801483    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.829535    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:44:52 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:52 ha-872795 kubelet[1957]:  > podSandboxID="ef6e539a74c14f8dbdf14c5253393747982dcd66fa486356dbfe446d81891de8"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.829641    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:44:52 ha-872795 kubelet[1957]:         container kube-apiserver start failed in pod kube-apiserver-ha-872795_kube-system(107e053e4ed538c93835a81754178211): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:52 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.829696    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-872795" podUID="107e053e4ed538c93835a81754178211"
	Oct 02 20:44:56 ha-872795 kubelet[1957]: E1002 20:44:56.709315    1957 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-872795.186ac7230cb11fa8  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-872795,UID:ha-872795,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-872795 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-872795,},FirstTimestamp:2025-10-02 20:39:17.792317352 +0000 UTC m=+0.765279708,LastTimestamp:2025-10-02 20:39:17.792317352 +0000 UTC m=+0.765279708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-872795,}"
	Oct 02 20:44:57 ha-872795 kubelet[1957]: E1002 20:44:57.817829    1957 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-872795\" not found"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 6 (283.838562ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:44:58.378198   72505 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (98.70s)
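
For context on the wait-control-plane failure quoted above: kubeadm's control-plane-check does nothing more than poll each component's health endpoint until it answers HTTP 200 or the 4m0s budget expires. The following is a minimal Go sketch of that retry loop, not kubeadm's actual code, using the endpoints from the log; InsecureSkipVerify is an assumption made here only because the components serve self-signed certificates and this is a diagnostic probe:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthy polls url until it answers HTTP 200 or the budget expires,
	// mirroring the retry loop behind the [control-plane-check] lines above.
	func waitHealthy(url string, budget time.Duration) error {
		client := &http.Client{
			Timeout: 10 * time.Second, // same per-request timeout as the log's ?timeout=10s
			Transport: &http.Transport{
				// Assumption: self-signed serving certs, so skip verification
				// for this diagnostic probe only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(budget)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("%s not healthy after %s", url, budget)
	}

	func main() {
		// Endpoints copied from the control-plane-check lines above.
		for _, url := range []string{
			"https://192.168.49.2:8443/livez", // kube-apiserver
			"https://127.0.0.1:10257/healthz", // kube-controller-manager
			"https://127.0.0.1:10259/livez",   // kube-scheduler
		} {
			if err := waitHealthy(url, 4*time.Minute); err != nil {
				fmt.Println(err)
			}
		}
	}

In this run all three probes were bound to fail: the CRI-O lines above show every container create aborting with "cannot open sd-bus: No such file or directory", so no control-plane process ever came up to bind its port.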

x
+
TestMultiControlPlane/serial/PingHostFromPods (1.29s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (87.449494ms)

** stderr ** 
	error: no server found for cluster "ha-872795"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
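
The 'no server found for cluster "ha-872795"' error above, like the earlier status.go:458 message, means the cluster entry was never written to the kubeconfig after the failed start, so kubectl cannot resolve a server URL for the profile. A minimal sketch of that lookup, assuming k8s.io/client-go is available as a dependency (the kubeconfig path is the one from the log):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path taken from the status.go:458 error message in the logs.
		path := "/home/jenkins/minikube-integration/21683-9327/kubeconfig"

		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		// kubectl maps the context's cluster name to a server URL through
		// this table; a missing entry yields "no server found for cluster".
		cluster, ok := cfg.Clusters["ha-872795"]
		if !ok {
			fmt.Printf("%q does not appear in %s\n", "ha-872795", path)
			return
		}
		fmt.Println("endpoint:", cluster.Server)
	}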
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 66301,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:35:02.679281779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f7265370a58453c840a8192d5da4a6feb4bbbb948c2dbabc9298cbd8189ca6f",
	            "SandboxKey": "/var/run/docker/netns/6f7265370a58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:33:39:9e:e0:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "632391284aa44c2eb0bec81b03555e3fed7084145763827002664c08b386a588",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
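
One detail worth noting in the docker inspect output above: the kicbase container publishes 8443/tcp on an ephemeral host port bound to 127.0.0.1 (32786 in this run), which is how minikube reaches the apiserver from the host. A sketch of recovering that mapping with the docker CLI's inspect template, assuming docker is on PATH (the profile name is from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The template indexes NetworkSettings.Ports["8443/tcp"][0].HostPort,
		// matching the JSON structure shown in the inspect output above.
		out, err := exec.Command("docker", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
			"ha-872795").Output()
		if err != nil {
			fmt.Println("docker inspect:", err)
			return
		}
		fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
	}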
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 6 (278.940171ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:44:58.764110   72650 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-753218 ssh pgrep buildkitd                                                                           │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ image   │ functional-753218 image ls --format yaml --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image build -t localhost/my-image:functional-753218 testdata/build --alsologtostderr          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format json --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format table --alsologtostderr                                                     │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls                                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ delete  │ -p functional-753218                                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ start   │ ha-872795 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- rollout status deployment/busybox                                                          │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:34:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:34:57.603077   65735 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:34:57.603305   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603314   65735 out.go:374] Setting ErrFile to fd 2...
	I1002 20:34:57.603317   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603506   65735 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:34:57.604047   65735 out.go:368] Setting JSON to false
	I1002 20:34:57.604900   65735 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4647,"bootTime":1759432651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:34:57.604978   65735 start.go:140] virtualization: kvm guest
	I1002 20:34:57.606757   65735 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:34:57.608100   65735 notify.go:221] Checking for updates...
	I1002 20:34:57.608137   65735 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:34:57.609378   65735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:34:57.610568   65735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:34:57.611851   65735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:34:57.613234   65735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:34:57.614447   65735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:34:57.615922   65735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:34:57.640606   65735 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:34:57.640699   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.694592   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.684173929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.694720   65735 docker.go:319] overlay module found
	I1002 20:34:57.696786   65735 out.go:179] * Using the docker driver based on user configuration
	I1002 20:34:57.698015   65735 start.go:306] selected driver: docker
	I1002 20:34:57.698032   65735 start.go:936] validating driver "docker" against <nil>
	I1002 20:34:57.698041   65735 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:34:57.698548   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.751429   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.741121046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.751598   65735 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:34:57.751837   65735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:34:57.753668   65735 out.go:179] * Using Docker driver with root privileges
	I1002 20:34:57.755151   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:34:57.755225   65735 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 20:34:57.755237   65735 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:34:57.755322   65735 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
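
The cluster config above is persisted verbatim as JSON (the profile path appears a few lines below). A minimal sketch for pulling fields back out of it, assuming jq is installed on the agent:

    jq '.KubernetesConfig.KubernetesVersion, .Nodes' \
      /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json
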
	I1002 20:34:57.756619   65735 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:34:57.757986   65735 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:34:57.759335   65735 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:34:57.760371   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:57.760415   65735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:34:57.760432   65735 cache.go:59] Caching tarball of preloaded images
	I1002 20:34:57.760405   65735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:34:57.760507   65735 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:34:57.760515   65735 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:34:57.760879   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:34:57.760904   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json: {Name:mk498e6817e1668b9a52683e21d0fee45dc5b922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:34:57.779767   65735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:34:57.779785   65735 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:34:57.779798   65735 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:34:57.779817   65735 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:34:57.779902   65735 start.go:365] duration metric: took 70.896µs to acquireMachinesLock for "ha-872795"
	I1002 20:34:57.779922   65735 start.go:94] Provisioning new machine with config: &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:34:57.779985   65735 start.go:126] createHost starting for "" (driver="docker")
	I1002 20:34:57.782551   65735 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 20:34:57.782770   65735 start.go:160] libmachine.API.Create for "ha-872795" (driver="docker")
	I1002 20:34:57.782797   65735 client.go:168] LocalClient.Create starting
	I1002 20:34:57.782848   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem
	I1002 20:34:57.782884   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782899   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.782957   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem
	I1002 20:34:57.782975   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782985   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.783261   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:34:57.799786   65735 cli_runner.go:211] docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:34:57.799838   65735 network_create.go:284] running [docker network inspect ha-872795] to gather additional debugging logs...
	I1002 20:34:57.799850   65735 cli_runner.go:164] Run: docker network inspect ha-872795
	W1002 20:34:57.816003   65735 cli_runner.go:211] docker network inspect ha-872795 returned with exit code 1
	I1002 20:34:57.816028   65735 network_create.go:287] error running [docker network inspect ha-872795]: docker network inspect ha-872795: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-872795 not found
	I1002 20:34:57.816042   65735 network_create.go:289] output of [docker network inspect ha-872795]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-872795 not found
	
	** /stderr **
	I1002 20:34:57.816123   65735 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:34:57.832837   65735 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d0e700}
	I1002 20:34:57.832869   65735 network_create.go:124] attempt to create docker network ha-872795 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:34:57.832917   65735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-872795 ha-872795
	I1002 20:34:57.888359   65735 network_create.go:108] docker network ha-872795 192.168.49.0/24 created
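
The network create invocation above can be sanity-checked with an inspect filter; a minimal sketch (this --format template is one of several equivalent filters):

    docker network inspect ha-872795 \
      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
    # expected: 192.168.49.0/24 192.168.49.1
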
	I1002 20:34:57.888401   65735 kic.go:121] calculated static IP "192.168.49.2" for the "ha-872795" container
	I1002 20:34:57.888473   65735 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:34:57.905091   65735 cli_runner.go:164] Run: docker volume create ha-872795 --label name.minikube.sigs.k8s.io=ha-872795 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:34:57.922367   65735 oci.go:103] Successfully created a docker volume ha-872795
	I1002 20:34:57.922439   65735 cli_runner.go:164] Run: docker run --rm --name ha-872795-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --entrypoint /usr/bin/test -v ha-872795:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:34:58.293551   65735 oci.go:107] Successfully prepared a docker volume ha-872795
	I1002 20:34:58.293606   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:58.293630   65735 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:34:58.293727   65735 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:35:02.570537   65735 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.276723115s)
	I1002 20:35:02.570574   65735 kic.go:203] duration metric: took 4.276941565s to extract preloaded images to volume ...
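
The preload step is just a throwaway container untarring an lz4 archive into the ha-872795 volume. A hedged way to confirm the image store landed (the busybox image and the containers/storage path are assumptions, not part of the test run):

    docker run --rm -v ha-872795:/var busybox ls /var/lib/containers/storage
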
	W1002 20:35:02.570728   65735 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 20:35:02.570763   65735 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 20:35:02.570812   65735 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:35:02.622879   65735 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-872795 --name ha-872795 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-872795 --network ha-872795 --ip 192.168.49.2 --volume ha-872795:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:35:02.884170   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Running}}
	I1002 20:35:02.901900   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:02.919319   65735 cli_runner.go:164] Run: docker exec ha-872795 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:35:02.965612   65735 oci.go:144] the created container "ha-872795" has a running status.
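
Because every node port is published to an ephemeral 127.0.0.1 port, the SSH endpoint used below (port 32783) can be recovered from Docker directly; a quick check:

    docker port ha-872795 22/tcp                          # prints 127.0.0.1:32783
    docker inspect ha-872795 --format '{{.State.Status}}' # prints running
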
	I1002 20:35:02.965668   65735 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa...
	I1002 20:35:03.839142   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 20:35:03.839192   65735 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:35:03.863522   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.880799   65735 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:35:03.880817   65735 kic_runner.go:114] Args: [docker exec --privileged ha-872795 chown docker:docker /home/docker/.ssh/authorized_keys]
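
Key provisioning here is plain docker cp/exec against the running node container; a minimal sketch of the same pattern (file names illustrative):

    ssh-keygen -t rsa -N '' -f ./id_rsa
    docker exec ha-872795 mkdir -p /home/docker/.ssh
    docker cp ./id_rsa.pub ha-872795:/home/docker/.ssh/authorized_keys
    docker exec --privileged ha-872795 chown docker:docker /home/docker/.ssh/authorized_keys
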
	I1002 20:35:03.926950   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.943420   65735 machine.go:93] provisionDockerMachine start ...
	I1002 20:35:03.943502   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:03.960150   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:03.960365   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:03.960377   65735 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:35:04.102560   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.102595   65735 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:35:04.102672   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.119770   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.119958   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.119969   65735 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:35:04.269954   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.270024   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.289297   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.289532   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.289560   65735 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:35:04.431294   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:35:04.431331   65735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:35:04.431373   65735 ubuntu.go:190] setting up certificates
	I1002 20:35:04.431385   65735 provision.go:84] configureAuth start
	I1002 20:35:04.431438   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:04.448348   65735 provision.go:143] copyHostCerts
	I1002 20:35:04.448379   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448406   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:35:04.448415   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448488   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:35:04.448574   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448595   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:35:04.448601   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448642   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:35:04.448726   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448747   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:35:04.448752   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448791   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:35:04.448862   65735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:35:04.988583   65735 provision.go:177] copyRemoteCerts
	I1002 20:35:04.988660   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:35:04.988705   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.006318   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.107594   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:35:05.107664   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:35:05.126297   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:35:05.126364   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 20:35:05.143480   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:35:05.143534   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:35:05.159876   65735 provision.go:87] duration metric: took 728.473665ms to configureAuth
	I1002 20:35:05.159898   65735 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:35:05.160046   65735 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:35:05.160126   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.177170   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:05.177376   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:05.177394   65735 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:35:05.428320   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:35:05.428345   65735 machine.go:96] duration metric: took 1.484903501s to provisionDockerMachine
	I1002 20:35:05.428356   65735 client.go:171] duration metric: took 7.645553848s to LocalClient.Create
	I1002 20:35:05.428375   65735 start.go:168] duration metric: took 7.645605965s to libmachine.API.Create "ha-872795"
	I1002 20:35:05.428384   65735 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:35:05.428397   65735 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:35:05.428457   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:35:05.428518   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.445519   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.548684   65735 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:35:05.552289   65735 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:35:05.552315   65735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:35:05.552328   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:35:05.552389   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:35:05.552506   65735 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:35:05.552523   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:35:05.552690   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:35:05.559922   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:05.579393   65735 start.go:297] duration metric: took 150.9926ms for postStartSetup
	I1002 20:35:05.579737   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.596296   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:35:05.596542   65735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:35:05.596580   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.613097   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.710446   65735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:35:05.714631   65735 start.go:129] duration metric: took 7.934634425s to createHost
	I1002 20:35:05.714669   65735 start.go:84] releasing machines lock for "ha-872795", held for 7.934754949s
	I1002 20:35:05.714730   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.731548   65735 ssh_runner.go:195] Run: cat /version.json
	I1002 20:35:05.731603   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.731636   65735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:35:05.731714   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.749775   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.750794   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.899878   65735 ssh_runner.go:195] Run: systemctl --version
	I1002 20:35:05.906012   65735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:35:05.939030   65735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:35:05.945162   65735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:35:05.945228   65735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:35:05.970822   65735 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 20:35:05.970843   65735 start.go:496] detecting cgroup driver to use...
	I1002 20:35:05.970881   65735 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:35:05.970920   65735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:35:05.986282   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:35:05.998266   65735 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:35:05.998312   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:35:06.014196   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:35:06.031061   65735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:35:06.110898   65735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:35:06.195426   65735 docker.go:234] disabling docker service ...
	I1002 20:35:06.195481   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:35:06.213240   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:35:06.225772   65735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:35:06.305091   65735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:35:06.379591   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:35:06.391746   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:35:06.405291   65735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:35:06.405351   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.415561   65735 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:35:06.415640   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.425296   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.433995   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.442516   65735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:35:06.450342   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.458541   65735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.471597   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.479764   65735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:35:06.486745   65735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:35:06.493925   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:06.570695   65735 ssh_runner.go:195] Run: sudo systemctl restart crio
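
After the restart, the sed edits above should be visible in the drop-in, and crictl (pointed at the CRI-O socket via /etc/crictl.yaml earlier) should reach the runtime. A sketch of the expected state, with the exact config lines assumed from the sed expressions:

    sudo crictl info >/dev/null && echo "crio reachable"
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
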
	I1002 20:35:06.674347   65735 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:35:06.674398   65735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:35:06.678210   65735 start.go:564] Will wait 60s for crictl version
	I1002 20:35:06.678265   65735 ssh_runner.go:195] Run: which crictl
	I1002 20:35:06.681670   65735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:35:06.705846   65735 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:35:06.705915   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.732749   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.761896   65735 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:35:06.763235   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:35:06.779789   65735 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:35:06.783762   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.793731   65735 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:35:06.793825   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:35:06.793893   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.823605   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.823628   65735 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:35:06.823701   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.848195   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.848217   65735 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:35:06.848224   65735 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:35:06.848297   65735 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:35:06.848363   65735 ssh_runner.go:195] Run: crio config
	I1002 20:35:06.893288   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:35:06.893308   65735 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:35:06.893324   65735 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:35:06.893344   65735 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:35:06.893447   65735 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
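This rendered config is what the init phase consumes once copied to /var/tmp/minikube/kubeadm.yaml; given the SystemVerification skip noted further down for the docker driver, the eventual invocation is roughly (a sketch, not the verbatim command line):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification
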
	I1002 20:35:06.893469   65735 kube-vip.go:115] generating kube-vip config ...
	I1002 20:35:06.893510   65735 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 20:35:06.905349   65735 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
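
The probe above only greps lsmod, so kube-vip falls back from IPVS load-balancing to ARP mode. On a host where the modules are available they could be loaded explicitly first (module names assumed from a standard kernel build):

    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
    lsmod | grep ip_vs
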
	I1002 20:35:06.905448   65735 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
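
Once this static pod comes up it claims 192.168.49.254 via ARP and leader-elects on the plndr-cp-lock lease; a hedged pair of checks against a working cluster:

    ping -c 1 192.168.49.254
    kubectl -n kube-system get lease plndr-cp-lock
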
	I1002 20:35:06.905495   65735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:35:06.913067   65735 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:35:06.913130   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 20:35:06.920364   65735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:35:06.932249   65735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:35:06.947038   65735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:35:06.959316   65735 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 20:35:06.973553   65735 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 20:35:06.977144   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.987569   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:07.065543   65735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:35:07.088051   65735 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:35:07.088073   65735 certs.go:195] generating shared ca certs ...
	I1002 20:35:07.088093   65735 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.088253   65735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:35:07.088318   65735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:35:07.088333   65735 certs.go:257] generating profile certs ...
	I1002 20:35:07.088394   65735 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:35:07.088419   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt with IP's: []
	I1002 20:35:07.271177   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt ...
	I1002 20:35:07.271216   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt: {Name:mkc9958351da3092e35cf2a0dff49f20b7dda9b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271433   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key ...
	I1002 20:35:07.271450   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key: {Name:mk9049d9b976553e3e7ceaf1eae8281114f2b93c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271566   65735 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2
	I1002 20:35:07.271586   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 20:35:07.440257   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 ...
	I1002 20:35:07.440291   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2: {Name:mk26eb0eaef560c0eb32f96e5b3e81634dfe2a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440486   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 ...
	I1002 20:35:07.440505   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2: {Name:mkc88fc8c0ccf5f6abaf6c3777b5135914058a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440621   65735 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt
	I1002 20:35:07.440775   65735 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key
	I1002 20:35:07.440868   65735 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:35:07.440913   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt with IP's: []
	I1002 20:35:07.579277   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt ...
	I1002 20:35:07.579309   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt: {Name:mk15b2fa98abbfdb48c91ec680acf9fb3d36790e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579497   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key ...
	I1002 20:35:07.579512   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key: {Name:mk9a0c724bfce05599dc583fc00cefa3d87ea4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579634   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:35:07.579676   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:35:07.579703   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:35:07.579724   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:35:07.579741   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:35:07.579761   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:35:07.579777   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:35:07.579793   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:35:07.579863   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:35:07.579919   65735 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:35:07.579934   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:35:07.579978   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:35:07.580017   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:35:07.580054   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:35:07.580109   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:07.580147   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.580168   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.580188   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.580777   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:35:07.598837   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:35:07.615911   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:35:07.632508   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:35:07.649448   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:35:07.666220   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:35:07.682564   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:35:07.698671   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:35:07.715367   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:35:07.733828   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:35:07.750440   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:35:07.767319   65735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
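
A quick way to confirm the apiserver cert carries the SANs generated above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2 and the 192.168.49.254 VIP), run inside the node:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'
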
	I1002 20:35:07.779274   65735 ssh_runner.go:195] Run: openssl version
	I1002 20:35:07.785412   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:35:07.793537   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797120   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797185   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.830751   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:35:07.839226   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:35:07.847497   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851214   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851284   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.884464   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:35:07.892882   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:35:07.901283   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904919   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904989   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.938825   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
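
The <hash>.0 names being linked here follow OpenSSL's subject-hash convention, which is how the verifier locates a trusted CA; the hash for each PEM is exactly what the `openssl x509 -hash` runs above print:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941, matching the /etc/ssl/certs/b5213941.0 symlink above
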
	I1002 20:35:07.947236   65735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:35:07.950736   65735 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:35:07.950798   65735 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:35:07.950893   65735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:35:07.950946   65735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:35:07.977912   65735 cri.go:89] found id: ""
	I1002 20:35:07.977979   65735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:35:07.986292   65735 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:35:07.994921   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:35:07.994975   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:35:08.002478   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:35:08.002496   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:35:08.002543   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:35:08.009971   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:35:08.010024   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:35:08.017237   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:35:08.024502   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:35:08.024556   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:35:08.031907   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.039330   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:35:08.039379   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.046463   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:35:08.053807   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:35:08.053903   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
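
	The grep/rm pairs above are a stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443 and is removed otherwise so kubeadm can regenerate it. A compact sketch of that loop, using the same file list and endpoint as the log:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # keep the kubeconfig only if it targets the expected endpoint
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done
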
	I1002 20:35:08.060624   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:35:08.119106   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:35:08.173560   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:39:12.973904   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:39:12.973990   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:39:12.976774   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:12.976818   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:12.976887   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:12.976934   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:12.976995   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:12.977058   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:12.977098   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:12.977140   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:12.977186   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:12.977225   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:12.977267   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:12.977305   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:12.977341   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:12.977406   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:12.977543   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:12.977704   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:12.977780   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:12.979758   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:12.979842   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:12.979923   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:12.980019   65735 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:39:12.980106   65735 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:39:12.980194   65735 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:39:12.980269   65735 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:39:12.980321   65735 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:39:12.980413   65735 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980466   65735 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:39:12.980568   65735 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980632   65735 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:39:12.980716   65735 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:39:12.980755   65735 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:39:12.980811   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:12.980857   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:12.980906   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:12.980951   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:12.981048   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:12.981126   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:12.981273   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:12.981350   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:12.983755   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:12.983842   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:12.983938   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:12.984036   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:12.984176   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:12.984257   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:12.984363   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:12.984488   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:12.984545   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:12.984753   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:12.984854   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:12.984903   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.922515ms
	I1002 20:39:12.984979   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:12.985054   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:12.985161   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:12.985238   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:39:12.985311   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	I1002 20:39:12.985378   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	I1002 20:39:12.985440   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	I1002 20:39:12.985446   65735 kubeadm.go:318] 
	I1002 20:39:12.985522   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:39:12.985593   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:39:12.985693   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:39:12.985824   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:39:12.985912   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:39:12.986021   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:39:12.986035   65735 kubeadm.go:318] 
	W1002 20:39:12.986169   65735 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.922515ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
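
	The troubleshooting hint embedded in the kubeadm output is the fastest check at this point: list the kube-* containers (including exited ones) and pull logs for whichever one failed. Spelled out with the socket path from this log (CONTAINERID is a placeholder for an ID taken from the listing):

	    # list all kube-* containers, running or exited, minus pause sandboxes
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a \
	      | grep kube | grep -v pause
	    # then inspect the failing container's logs
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
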
	
	I1002 20:39:12.986234   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:39:15.725293   65735 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.739035287s)
	I1002 20:39:15.725355   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:39:15.737845   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:39:15.737903   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:39:15.745492   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:39:15.745511   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:39:15.745550   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:39:15.752939   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:39:15.752992   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:39:15.759738   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:39:15.767128   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:39:15.767188   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:39:15.774064   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.781262   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:39:15.781310   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.788069   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:39:15.794962   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:39:15.795005   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:39:15.801555   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:39:15.838178   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:15.838265   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:15.857468   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:15.857554   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:15.857595   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:15.857662   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:15.857732   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:15.857790   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:15.857850   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:15.857910   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:15.857968   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:15.858025   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:15.858074   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:15.913423   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:15.913519   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:15.913695   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:15.919381   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:15.923412   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:15.923499   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:15.923576   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:15.923699   65735 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:39:15.923782   65735 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:39:15.923894   65735 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:39:15.923992   65735 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:39:15.924083   65735 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:39:15.924181   65735 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:39:15.924290   65735 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:39:15.924413   65735 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:39:15.924467   65735 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:39:15.924516   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:15.972082   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:16.453776   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:16.604369   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:16.663878   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:16.897315   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:16.897840   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:16.900020   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:16.904460   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:16.904580   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:16.904708   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:16.904777   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:16.917381   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:16.917507   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:16.923733   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:16.923961   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:16.924014   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:17.026795   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:17.026994   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:18.027761   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001153954s
	I1002 20:39:18.030723   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:18.030874   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:18.030983   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:18.031132   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:43:18.032268   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	I1002 20:43:18.032470   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	I1002 20:43:18.032708   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	I1002 20:43:18.032742   65735 kubeadm.go:318] 
	I1002 20:43:18.032978   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:43:18.033184   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:43:18.033380   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:43:18.033599   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:43:18.033817   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:43:18.034037   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:43:18.034052   65735 kubeadm.go:318] 
	I1002 20:43:18.036299   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:43:18.036439   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:43:18.037026   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 20:43:18.037095   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:43:18.037166   65735 kubeadm.go:402] duration metric: took 8m10.08637402s to StartCluster
	I1002 20:43:18.037208   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:43:18.037266   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:43:18.063637   65735 cri.go:89] found id: ""
	I1002 20:43:18.063696   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.063705   65735 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:43:18.063713   65735 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:43:18.063764   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:43:18.087963   65735 cri.go:89] found id: ""
	I1002 20:43:18.087984   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.087991   65735 logs.go:284] No container was found matching "etcd"
	I1002 20:43:18.087996   65735 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:43:18.088039   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:43:18.111877   65735 cri.go:89] found id: ""
	I1002 20:43:18.111898   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.111905   65735 logs.go:284] No container was found matching "coredns"
	I1002 20:43:18.111911   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:43:18.111958   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:43:18.134814   65735 cri.go:89] found id: ""
	I1002 20:43:18.134834   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.134841   65735 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:43:18.134847   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:43:18.134887   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:43:18.158528   65735 cri.go:89] found id: ""
	I1002 20:43:18.158551   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.158558   65735 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:43:18.158564   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:43:18.158607   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:43:18.182849   65735 cri.go:89] found id: ""
	I1002 20:43:18.182871   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.182878   65735 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:43:18.182883   65735 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:43:18.182926   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:43:18.207447   65735 cri.go:89] found id: ""
	I1002 20:43:18.207470   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.207481   65735 logs.go:284] No container was found matching "kindnet"
	I1002 20:43:18.207493   65735 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:43:18.207507   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:43:18.263385   65735 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	(same connection-refused errors as above, omitted as a verbatim duplicate)
	
	** /stderr **
	I1002 20:43:18.263409   65735 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:43:18.263421   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:43:18.324936   65735 logs.go:123] Gathering logs for container status ...
	I1002 20:43:18.324969   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:43:18.351509   65735 logs.go:123] Gathering logs for kubelet ...
	I1002 20:43:18.351536   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:43:18.417761   65735 logs.go:123] Gathering logs for dmesg ...
	I1002 20:43:18.417794   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
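
	The four Run lines above are minikube's own evidence-gathering pass; the same data can be collected by hand inside the node with the commands it ran:

	    sudo journalctl -u crio -n 400       # CRI-O runtime log
	    sudo journalctl -u kubelet -n 400    # kubelet log
	    sudo crictl ps -a                    # container status
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
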
	W1002 20:43:18.428567   65735 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:43:18.428641   65735 out.go:285] * 
	W1002 20:43:18.428716   65735 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: identical to the preceding "Error starting cluster" kubeadm init dump (omitted as a verbatim duplicate)
	
	W1002 20:43:18.428735   65735 out.go:285] * 
	W1002 20:43:18.430415   65735 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:43:18.433894   65735 out.go:203] 
	W1002 20:43:18.435307   65735 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: identical to the preceding "Error starting cluster" kubeadm init dump (omitted as a verbatim duplicate)
	
	W1002 20:43:18.435342   65735 out.go:285] * 
	I1002 20:43:18.437263   65735 out.go:203] 
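The crictl hint in the kubeadm output above scripts easily. The following is a minimal Go sketch of the same list-and-filter step (illustrative only, not part of the minikube test suite); it assumes crictl is installed on the node and uses the CRI-O socket path quoted in the hint.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// endpoint is the CRI-O socket from the kubeadm hint above.
	const endpoint = "unix:///var/run/crio/crio.sock"

	func main() {
		// Equivalent of: crictl --runtime-endpoint <endpoint> ps -a | grep kube | grep -v pause
		out, err := exec.Command("crictl", "--runtime-endpoint", endpoint, "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Printf("crictl ps failed: %v\n%s", err, out)
			return
		}
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, "kube") && !strings.Contains(line, "pause") {
				fmt.Println(line)
			}
		}
	}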
	
	
	==> CRI-O <==
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.807981244Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.808536306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.809808803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.810257214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.823811065Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=41ed485b-12d3-411c-967a-896b2c0fdab3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.825019605Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9f60a799-18bb-48b0-8c3f-888a44f48b88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.825202406Z" level=info msg="createCtr: deleting container ID dcd029b90758f65c5b06f8e6b1b8b4c1db17ace1d11582b4db6106dfd1034230 from idIndex" id=41ed485b-12d3-411c-967a-896b2c0fdab3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.825238957Z" level=info msg="createCtr: removing container dcd029b90758f65c5b06f8e6b1b8b4c1db17ace1d11582b4db6106dfd1034230" id=41ed485b-12d3-411c-967a-896b2c0fdab3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.825277088Z" level=info msg="createCtr: deleting container dcd029b90758f65c5b06f8e6b1b8b4c1db17ace1d11582b4db6106dfd1034230 from storage" id=41ed485b-12d3-411c-967a-896b2c0fdab3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.826300407Z" level=info msg="createCtr: deleting container ID 138a6dabf63007d1b4a2c0acc749aa56f5ab1e459e19ff534e44fab9b2d43fa0 from idIndex" id=9f60a799-18bb-48b0-8c3f-888a44f48b88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.826328555Z" level=info msg="createCtr: removing container 138a6dabf63007d1b4a2c0acc749aa56f5ab1e459e19ff534e44fab9b2d43fa0" id=9f60a799-18bb-48b0-8c3f-888a44f48b88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.826356719Z" level=info msg="createCtr: deleting container 138a6dabf63007d1b4a2c0acc749aa56f5ab1e459e19ff534e44fab9b2d43fa0 from storage" id=9f60a799-18bb-48b0-8c3f-888a44f48b88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.827919811Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=41ed485b-12d3-411c-967a-896b2c0fdab3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.830031075Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-872795_kube-system_d62260a65d00c99f27090ce6484101a9_0" id=9f60a799-18bb-48b0-8c3f-888a44f48b88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.801883479Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=57eb11f2-ebce-4ac7-9758-5513f4d13809 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.802810009Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=5e092a64-a9db-46ea-a6f8-75a15c4871a4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.803693911Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-872795/kube-apiserver" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.803901947Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.807281855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.8076945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.825644838Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.827042779Z" level=info msg="createCtr: deleting container ID 3e8152e866fa071f321cf012566678af0c3b90587478627e33a052a71fe906e3 from idIndex" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.827075909Z" level=info msg="createCtr: removing container 3e8152e866fa071f321cf012566678af0c3b90587478627e33a052a71fe906e3" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.827105892Z" level=info msg="createCtr: deleting container 3e8152e866fa071f321cf012566678af0c3b90587478627e33a052a71fe906e3 from storage" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.829276684Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-872795_kube-system_107e053e4ed538c93835a81754178211_0" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
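Every CreateContainer call in this window fails with the same "cannot open sd-bus" error, which usually means the OCI runtime was configured to use the systemd cgroup manager but cannot reach a systemd D-Bus socket inside the node. A minimal sketch for confirming that, assuming the standard socket locations (the two paths below are the usual defaults, not values taken from this log):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// The usual sockets sd-bus connects to; if neither exists inside the
		// node, "cannot open sd-bus" from the runtime is the expected symptom.
		for _, s := range []string{"/run/systemd/private", "/run/dbus/system_bus_socket"} {
			if _, err := os.Stat(s); err != nil {
				fmt.Printf("%s: missing (%v)\n", s, err)
			} else {
				fmt.Printf("%s: present\n", s)
			}
		}
	}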
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:44:59.318861    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:44:59.319379    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:44:59.320989    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:44:59.321401    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:44:59.322904    3204 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
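All five kubectl calls above die on the same connection-refused dial, so a useful next probe is the set of health endpoints kubeadm was polling earlier (8443/livez, 10257/healthz, 10259/livez). A minimal sketch that checks all three directly, assuming it runs on the node; TLS verification is skipped because only reachability matters here:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		// The same endpoints kubeadm's control-plane-check polled above.
		for _, url := range []string{
			"https://192.168.49.2:8443/livez",
			"https://127.0.0.1:10257/healthz",
			"https://127.0.0.1:10259/livez",
		} {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("%s: %v\n", url, err)
				continue
			}
			resp.Body.Close()
			fmt.Printf("%s: %s\n", url, resp.Status)
		}
	}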
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:44:59 up  1:27,  0 user,  load average: 0.90, 0.48, 0.25
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:44:50 ha-872795 kubelet[1957]:  > podSandboxID="079790ff593aadbe100b150a42b87bda86092d1fcff86f8774566f658d455d0a"
	Oct 02 20:44:50 ha-872795 kubelet[1957]: E1002 20:44:50.828643    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:44:50 ha-872795 kubelet[1957]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-872795_kube-system(bb93bd54c951d044e2ddbaf0dd48a41c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:50 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:44:50 ha-872795 kubelet[1957]: E1002 20:44:50.828711    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-872795" podUID="bb93bd54c951d044e2ddbaf0dd48a41c"
	Oct 02 20:44:50 ha-872795 kubelet[1957]: E1002 20:44:50.830236    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:44:50 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:50 ha-872795 kubelet[1957]:  > podSandboxID="a3772fd373a48cf1fd0c1e339ac227ef43a36813ee71e966f4b33f85ee2aa32d"
	Oct 02 20:44:50 ha-872795 kubelet[1957]: E1002 20:44:50.830326    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:44:50 ha-872795 kubelet[1957]:         container etcd start failed in pod etcd-ha-872795_kube-system(d62260a65d00c99f27090ce6484101a9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:50 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:44:50 ha-872795 kubelet[1957]: E1002 20:44:50.830360    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-872795" podUID="d62260a65d00c99f27090ce6484101a9"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.437895    1957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: I1002 20:44:52.603095    1957 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.603495    1957 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.801483    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.829535    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:44:52 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:52 ha-872795 kubelet[1957]:  > podSandboxID="ef6e539a74c14f8dbdf14c5253393747982dcd66fa486356dbfe446d81891de8"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.829641    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:44:52 ha-872795 kubelet[1957]:         container kube-apiserver start failed in pod kube-apiserver-ha-872795_kube-system(107e053e4ed538c93835a81754178211): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:52 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.829696    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-872795" podUID="107e053e4ed538c93835a81754178211"
	Oct 02 20:44:56 ha-872795 kubelet[1957]: E1002 20:44:56.709315    1957 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-872795.186ac7230cb11fa8  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-872795,UID:ha-872795,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-872795 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-872795,},FirstTimestamp:2025-10-02 20:39:17.792317352 +0000 UTC m=+0.765279708,LastTimestamp:2025-10-02 20:39:17.792317352 +0000 UTC m=+0.765279708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-872795,}"
	Oct 02 20:44:57 ha-872795 kubelet[1957]: E1002 20:44:57.817829    1957 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-872795\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 6 (278.490622ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 20:44:59.671737   72977 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (1.29s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (1.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 node add --alsologtostderr -v 5: exit status 103 (240.431121ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-872795 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-872795"

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 20:44:59.725262   73089 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:44:59.725559   73089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:44:59.725570   73089 out.go:374] Setting ErrFile to fd 2...
	I1002 20:44:59.725577   73089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:44:59.725765   73089 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:44:59.726064   73089 mustload.go:65] Loading cluster: ha-872795
	I1002 20:44:59.726407   73089 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:44:59.726834   73089 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:44:59.743535   73089 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:44:59.743788   73089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:44:59.795500   73089 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:44:59.786273107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:44:59.795673   73089 api_server.go:166] Checking apiserver status ...
	I1002 20:44:59.795720   73089 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:44:59.795765   73089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:44:59.813919   73089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	W1002 20:44:59.917334   73089 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:44:59.919849   73089 out.go:179] * The control-plane node ha-872795 apiserver is not running: (state=Stopped)
	I1002 20:44:59.921311   73089 out.go:179]   To start a cluster, run: "minikube start -p ha-872795"

                                                
                                                
** /stderr **
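The exit-status-103 path above reduces to a single probe: at 20:44:59.795 minikube runs sudo pgrep -xnf kube-apiserver.*minikube.* over SSH and maps a non-zero exit to state=Stopped. A minimal local sketch of that decision, assuming pgrep is available; this approximates the check visible in the log, not minikube's full api_server.go logic:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// apiserverState mirrors the probe from the log: pgrep exits non-zero when
	// no kube-apiserver process matches, which minikube reports as Stopped.
	func apiserverState() string {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
			return "Stopped"
		}
		return "Running"
	}

	func main() {
		fmt.Println("apiserver:", apiserverState())
	}
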
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-872795 node add --alsologtostderr -v 5" : exit status 103
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 66301,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:35:02.679281779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f7265370a58453c840a8192d5da4a6feb4bbbb948c2dbabc9298cbd8189ca6f",
	            "SandboxKey": "/var/run/docker/netns/6f7265370a58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:33:39:9e:e0:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "632391284aa44c2eb0bec81b03555e3fed7084145763827002664c08b386a588",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
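The Ports map in the inspect output above is what minikube's template query resolves; the stderr earlier shows the same -f expression used for 22/tcp. A minimal sketch that reuses that template for any container port; against this container it should print 32786 for 8443/tcp:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort reuses the Go-template query from minikube's cli_runner line
	// to read the published host port for a given container port.
	func hostPort(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		p, err := hostPort("ha-872795", "8443/tcp")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("8443/tcp ->", p) // expected: 32786 per the inspect output above
	}
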
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 6 (281.285051ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 20:45:00.212503   73215 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-753218 image ls --format yaml --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image build -t localhost/my-image:functional-753218 testdata/build --alsologtostderr          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format json --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format table --alsologtostderr                                                     │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls                                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ delete  │ -p functional-753218                                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ start   │ ha-872795 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- rollout status deployment/busybox                                                          │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node add --alsologtostderr -v 5                                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:34:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:34:57.603077   65735 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:34:57.603305   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603314   65735 out.go:374] Setting ErrFile to fd 2...
	I1002 20:34:57.603317   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603506   65735 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:34:57.604047   65735 out.go:368] Setting JSON to false
	I1002 20:34:57.604900   65735 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4647,"bootTime":1759432651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:34:57.604978   65735 start.go:140] virtualization: kvm guest
	I1002 20:34:57.606757   65735 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:34:57.608100   65735 notify.go:221] Checking for updates...
	I1002 20:34:57.608137   65735 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:34:57.609378   65735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:34:57.610568   65735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:34:57.611851   65735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:34:57.613234   65735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:34:57.614447   65735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:34:57.615922   65735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:34:57.640606   65735 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:34:57.640699   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.694592   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.684173929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.694720   65735 docker.go:319] overlay module found
	I1002 20:34:57.696786   65735 out.go:179] * Using the docker driver based on user configuration
	I1002 20:34:57.698015   65735 start.go:306] selected driver: docker
	I1002 20:34:57.698032   65735 start.go:936] validating driver "docker" against <nil>
	I1002 20:34:57.698041   65735 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:34:57.698548   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.751429   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.741121046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.751598   65735 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:34:57.751837   65735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:34:57.753668   65735 out.go:179] * Using Docker driver with root privileges
	I1002 20:34:57.755151   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:34:57.755225   65735 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 20:34:57.755237   65735 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:34:57.755322   65735 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1002 20:34:57.756619   65735 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:34:57.757986   65735 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:34:57.759335   65735 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:34:57.760371   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:57.760415   65735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:34:57.760432   65735 cache.go:59] Caching tarball of preloaded images
	I1002 20:34:57.760405   65735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:34:57.760507   65735 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:34:57.760515   65735 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:34:57.760879   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:34:57.760904   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json: {Name:mk498e6817e1668b9a52683e21d0fee45dc5b922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:34:57.779767   65735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:34:57.779785   65735 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:34:57.779798   65735 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:34:57.779817   65735 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:34:57.779902   65735 start.go:365] duration metric: took 70.896µs to acquireMachinesLock for "ha-872795"
	I1002 20:34:57.779922   65735 start.go:94] Provisioning new machine with config: &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:34:57.779985   65735 start.go:126] createHost starting for "" (driver="docker")
	I1002 20:34:57.782551   65735 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 20:34:57.782770   65735 start.go:160] libmachine.API.Create for "ha-872795" (driver="docker")
	I1002 20:34:57.782797   65735 client.go:168] LocalClient.Create starting
	I1002 20:34:57.782848   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem
	I1002 20:34:57.782884   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782899   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.782957   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem
	I1002 20:34:57.782975   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782985   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.783261   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:34:57.799786   65735 cli_runner.go:211] docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:34:57.799838   65735 network_create.go:284] running [docker network inspect ha-872795] to gather additional debugging logs...
	I1002 20:34:57.799850   65735 cli_runner.go:164] Run: docker network inspect ha-872795
	W1002 20:34:57.816003   65735 cli_runner.go:211] docker network inspect ha-872795 returned with exit code 1
	I1002 20:34:57.816028   65735 network_create.go:287] error running [docker network inspect ha-872795]: docker network inspect ha-872795: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-872795 not found
	I1002 20:34:57.816042   65735 network_create.go:289] output of [docker network inspect ha-872795]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-872795 not found
	
	** /stderr **
	I1002 20:34:57.816123   65735 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:34:57.832837   65735 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d0e700}
	I1002 20:34:57.832869   65735 network_create.go:124] attempt to create docker network ha-872795 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:34:57.832917   65735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-872795 ha-872795
	I1002 20:34:57.888359   65735 network_create.go:108] docker network ha-872795 192.168.49.0/24 created
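
The exchange above is minikube's probe-then-create pattern for its dedicated bridge network: inspect by name, treat the "network ha-872795 not found" exit code 1 as absence, pick a free private /24, then create the network with an explicit subnet, gateway, and MTU. A minimal standalone sketch of the same flow, assuming a hypothetical network name demo-net (the test itself uses ha-872795):

    #!/usr/bin/env bash
    # Probe-then-create: only create the bridge network if inspect reports it absent.
    set -euo pipefail
    NET=demo-net                 # assumption; not a name taken from this run
    if ! docker network inspect "$NET" >/dev/null 2>&1; then
      docker network create \
        --driver=bridge \
        --subnet=192.168.49.0/24 \
        --gateway=192.168.49.1 \
        -o com.docker.network.driver.mtu=1500 \
        "$NET"
    fi
    # Read back what exists, much like the log's inspect template.
    docker network inspect "$NET" \
      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
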
	I1002 20:34:57.888401   65735 kic.go:121] calculated static IP "192.168.49.2" for the "ha-872795" container
	I1002 20:34:57.888473   65735 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:34:57.905091   65735 cli_runner.go:164] Run: docker volume create ha-872795 --label name.minikube.sigs.k8s.io=ha-872795 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:34:57.922367   65735 oci.go:103] Successfully created a docker volume ha-872795
	I1002 20:34:57.922439   65735 cli_runner.go:164] Run: docker run --rm --name ha-872795-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --entrypoint /usr/bin/test -v ha-872795:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:34:58.293551   65735 oci.go:107] Successfully prepared a docker volume ha-872795
	I1002 20:34:58.293606   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:58.293630   65735 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:34:58.293727   65735 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:35:02.570537   65735 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.276723115s)
	I1002 20:35:02.570574   65735 kic.go:203] duration metric: took 4.276941565s to extract preloaded images to volume ...
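
The two docker run invocations above seed the node's /var without ever starting the node: a throwaway container first validates the named volume (the --entrypoint /usr/bin/test ... -d /var/lib probe), then the preload tarball is bind-mounted read-only and untarred straight into the volume. A sketch of the same volume-seeding trick under assumed names (seed-vol, seed.tar, and the ubuntu image are placeholders, not from this run):

    #!/usr/bin/env bash
    set -euo pipefail
    VOL=seed-vol                           # hypothetical volume name
    docker volume create "$VOL"
    # Extract a local archive into the volume via a disposable container; --rm
    # discards the container, the extracted files persist in the volume.
    # (minikube additionally passes -I lz4 because its preload is lz4-compressed
    # and the kicbase image ships an lz4 binary.)
    docker run --rm \
      -v "$PWD/seed.tar:/seed.tar:ro" \
      -v "$VOL:/extractDir" \
      --entrypoint /usr/bin/tar \
      ubuntu -xf /seed.tar -C /extractDir
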
	W1002 20:35:02.570728   65735 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 20:35:02.570763   65735 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 20:35:02.570812   65735 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:35:02.622879   65735 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-872795 --name ha-872795 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-872795 --network ha-872795 --ip 192.168.49.2 --volume ha-872795:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:35:02.884170   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Running}}
	I1002 20:35:02.901900   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:02.919319   65735 cli_runner.go:164] Run: docker exec ha-872795 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:35:02.965612   65735 oci.go:144] the created container "ha-872795" has a running status.
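
Note that after the detached docker run, minikube does not trust the create call; it re-inspects .State.Running and .State.Status before declaring the container up. A small polling sketch of that readiness check (the 30-second timeout is an assumption):

    #!/usr/bin/env bash
    set -euo pipefail
    NAME=${1:-ha-872795}
    for _ in $(seq 1 30); do
      state=$(docker container inspect "$NAME" --format '{{.State.Running}}' 2>/dev/null || echo false)
      if [ "$state" = "true" ]; then echo "$NAME is running"; exit 0; fi
      sleep 1
    done
    echo "timed out waiting for $NAME" >&2; exit 1
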
	I1002 20:35:02.965668   65735 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa...
	I1002 20:35:03.839142   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 20:35:03.839192   65735 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:35:03.863522   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.880799   65735 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:35:03.880817   65735 kic_runner.go:114] Args: [docker exec --privileged ha-872795 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:35:03.926950   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.943420   65735 machine.go:93] provisionDockerMachine start ...
	I1002 20:35:03.943502   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:03.960150   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:03.960365   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:03.960377   65735 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:35:04.102560   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.102595   65735 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:35:04.102672   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.119770   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.119958   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.119969   65735 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:35:04.269954   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.270024   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.289297   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.289532   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.289560   65735 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:35:04.431294   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
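
The hostname step run over SSH above is deliberately idempotent: set the kernel hostname, persist it to /etc/hostname, then rewrite an existing 127.0.1.1 entry in /etc/hosts or append one if none exists. The same guard as a standalone script for a Debian-family host (the argument handling is an addition, not from the log):

    #!/usr/bin/env bash
    set -euo pipefail
    NEWNAME=${1:?usage: $0 <hostname>}       # hypothetical argument
    sudo hostname "$NEWNAME" && echo "$NEWNAME" | sudo tee /etc/hostname >/dev/null
    if ! grep -q "[[:space:]]$NEWNAME\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NEWNAME/" /etc/hosts
      else
        echo "127.0.1.1 $NEWNAME" | sudo tee -a /etc/hosts >/dev/null
      fi
    fi
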
	I1002 20:35:04.431331   65735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:35:04.431373   65735 ubuntu.go:190] setting up certificates
	I1002 20:35:04.431385   65735 provision.go:84] configureAuth start
	I1002 20:35:04.431438   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:04.448348   65735 provision.go:143] copyHostCerts
	I1002 20:35:04.448379   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448406   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:35:04.448415   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448488   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:35:04.448574   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448595   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:35:04.448601   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448642   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:35:04.448726   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448747   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:35:04.448752   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448791   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:35:04.448862   65735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:35:04.988583   65735 provision.go:177] copyRemoteCerts
	I1002 20:35:04.988660   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:35:04.988705   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.006318   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.107594   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:35:05.107664   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:35:05.126297   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:35:05.126364   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 20:35:05.143480   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:35:05.143534   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:35:05.159876   65735 provision.go:87] duration metric: took 728.473665ms to configureAuth
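
configureAuth generates a server certificate whose SAN list has to cover every address a client may dial: 127.0.0.1, 192.168.49.2, ha-872795, localhost, and minikube, per the san=[...] line above. One way to confirm those SANs actually landed in the generated cert, using the path from this run:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
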
	I1002 20:35:05.159898   65735 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:35:05.160046   65735 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:35:05.160126   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.177170   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:05.177376   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:05.177394   65735 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:35:05.428320   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:35:05.428345   65735 machine.go:96] duration metric: took 1.484903501s to provisionDockerMachine
	I1002 20:35:05.428356   65735 client.go:171] duration metric: took 7.645553848s to LocalClient.Create
	I1002 20:35:05.428375   65735 start.go:168] duration metric: took 7.645605965s to libmachine.API.Create "ha-872795"
	I1002 20:35:05.428384   65735 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:35:05.428397   65735 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:35:05.428457   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:35:05.428518   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.445519   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.548684   65735 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:35:05.552289   65735 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:35:05.552315   65735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:35:05.552328   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:35:05.552389   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:35:05.552506   65735 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:35:05.552523   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:35:05.552690   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:35:05.559922   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:05.579393   65735 start.go:297] duration metric: took 150.9926ms for postStartSetup
	I1002 20:35:05.579737   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.596296   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:35:05.596542   65735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:35:05.596580   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.613097   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.710446   65735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:35:05.714631   65735 start.go:129] duration metric: took 7.934634425s to createHost
	I1002 20:35:05.714669   65735 start.go:84] releasing machines lock for "ha-872795", held for 7.934754949s
	I1002 20:35:05.714730   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.731548   65735 ssh_runner.go:195] Run: cat /version.json
	I1002 20:35:05.731603   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.731636   65735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:35:05.731714   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.749775   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.750794   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.899878   65735 ssh_runner.go:195] Run: systemctl --version
	I1002 20:35:05.906012   65735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:35:05.939030   65735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:35:05.945162   65735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:35:05.945228   65735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:35:05.970822   65735 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 20:35:05.970843   65735 start.go:496] detecting cgroup driver to use...
	I1002 20:35:05.970881   65735 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:35:05.970920   65735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:35:05.986282   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:35:05.998266   65735 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:35:05.998312   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:35:06.014196   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:35:06.031061   65735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:35:06.110898   65735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:35:06.195426   65735 docker.go:234] disabling docker service ...
	I1002 20:35:06.195481   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:35:06.213240   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:35:06.225772   65735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:35:06.305091   65735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:35:06.379591   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:35:06.391746   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:35:06.405291   65735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:35:06.405351   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.415561   65735 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:35:06.415640   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.425296   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.433995   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.442516   65735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:35:06.450342   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.458541   65735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.471597   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.479764   65735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:35:06.486745   65735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:35:06.493925   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:06.570695   65735 ssh_runner.go:195] Run: sudo systemctl restart crio
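
Each sed in the block above rewrites one key of /etc/crio/crio.conf.d/02-crio.conf in place: pause_image to registry.k8s.io/pause:3.10.1, cgroup_manager to systemd (matching the cgroup driver detected on the host), conmon_cgroup to pod, and a default_sysctls block that reopens unprivileged low ports. After the daemon-reload and restart, the merged runtime configuration can be read back; a quick check (the grep pattern is an assumption about which keys you care about):

    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup'
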
	I1002 20:35:06.674347   65735 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:35:06.674398   65735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:35:06.678210   65735 start.go:564] Will wait 60s for crictl version
	I1002 20:35:06.678265   65735 ssh_runner.go:195] Run: which crictl
	I1002 20:35:06.681670   65735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:35:06.705846   65735 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:35:06.705915   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.732749   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.761896   65735 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:35:06.763235   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:35:06.779789   65735 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:35:06.783762   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.793731   65735 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:35:06.793825   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:35:06.793893   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.823605   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.823628   65735 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:35:06.823701   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.848195   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.848217   65735 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:35:06.848224   65735 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:35:06.848297   65735 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:35:06.848363   65735 ssh_runner.go:195] Run: crio config
	I1002 20:35:06.893288   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:35:06.893308   65735 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:35:06.893324   65735 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:35:06.893344   65735 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:35:06.893447   65735 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
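
The rendered kubeadm config above packs InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration into one multi-document YAML, which the log later copies to /var/tmp/minikube/kubeadm.yaml and feeds to kubeadm init. When triaging a config like this by hand, a dry run is a cheap way to surface schema errors without touching the node; a sketch, assuming the file is already in place:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
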
	
	I1002 20:35:06.893469   65735 kube-vip.go:115] generating kube-vip config ...
	I1002 20:35:06.893510   65735 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 20:35:06.905349   65735 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:35:06.905448   65735 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
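
The generated kube-vip manifest is a static pod: the log scp's it to /etc/kubernetes/manifests (the staticPodPath set in the KubeletConfiguration above), so the kubelet runs it without any API server involvement, and the container announces 192.168.49.254 on eth0 via ARP once it wins the plndr-cp-lock lease. Two quick checks from a shell on the node, assuming the control plane eventually comes up:

    ip addr show dev eth0 | grep 192.168.49.254   # the VIP, bound once kube-vip holds the lease
    sudo crictl ps --name kube-vip                # the static pod, running under CRI-O
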
	I1002 20:35:06.905495   65735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:35:06.913067   65735 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:35:06.913130   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 20:35:06.920364   65735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:35:06.932249   65735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:35:06.947038   65735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:35:06.959316   65735 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 20:35:06.973553   65735 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 20:35:06.977144   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.987569   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:07.065543   65735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:35:07.088051   65735 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:35:07.088073   65735 certs.go:195] generating shared ca certs ...
	I1002 20:35:07.088093   65735 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.088253   65735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:35:07.088318   65735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:35:07.088333   65735 certs.go:257] generating profile certs ...
	I1002 20:35:07.088394   65735 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:35:07.088419   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt with IP's: []
	I1002 20:35:07.271177   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt ...
	I1002 20:35:07.271216   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt: {Name:mkc9958351da3092e35cf2a0dff49f20b7dda9b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271433   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key ...
	I1002 20:35:07.271450   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key: {Name:mk9049d9b976553e3e7ceaf1eae8281114f2b93c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271566   65735 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2
	I1002 20:35:07.271586   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 20:35:07.440257   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 ...
	I1002 20:35:07.440291   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2: {Name:mk26eb0eaef560c0eb32f96e5b3e81634dfe2a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440486   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 ...
	I1002 20:35:07.440505   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2: {Name:mkc88fc8c0ccf5f6abaf6c3777b5135914058a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440621   65735 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt
	I1002 20:35:07.440775   65735 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key
	I1002 20:35:07.440868   65735 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:35:07.440913   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt with IP's: []
	I1002 20:35:07.579277   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt ...
	I1002 20:35:07.579309   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt: {Name:mk15b2fa98abbfdb48c91ec680acf9fb3d36790e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579497   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key ...
	I1002 20:35:07.579512   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key: {Name:mk9a0c724bfce05599dc583fc00cefa3d87ea4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579634   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:35:07.579676   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:35:07.579703   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:35:07.579724   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:35:07.579741   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:35:07.579761   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:35:07.579777   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:35:07.579793   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:35:07.579863   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:35:07.579919   65735 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:35:07.579934   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:35:07.579978   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:35:07.580017   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:35:07.580054   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:35:07.580109   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:07.580147   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.580168   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.580188   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.580777   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:35:07.598837   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:35:07.615911   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:35:07.632508   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:35:07.649448   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:35:07.666220   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:35:07.682564   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:35:07.698671   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:35:07.715367   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:35:07.733828   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:35:07.750440   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:35:07.767319   65735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:35:07.779274   65735 ssh_runner.go:195] Run: openssl version
	I1002 20:35:07.785412   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:35:07.793537   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797120   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797185   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.830751   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:35:07.839226   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:35:07.847497   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851214   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851284   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.884464   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:35:07.892882   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:35:07.901283   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904919   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904989   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.938825   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
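
The test-and-link commands above implement OpenSSL's CApath convention: a CA file in /etc/ssl/certs is only found if a symlink named <subject-hash>.0 points at it, where the hash comes from openssl x509 -hash. The 3ec20f2e.0, b5213941.0, and 51391683.0 names in the log are exactly those hashes. The same linking step for an arbitrary PEM (the input path is an assumption):

    #!/usr/bin/env bash
    set -euo pipefail
    PEM=${1:?usage: $0 <cert.pem>}                 # hypothetical input path
    HASH=$(openssl x509 -hash -noout -in "$PEM")
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"  # the link c_rehash would create
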
	I1002 20:35:07.947236   65735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:35:07.950736   65735 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:35:07.950798   65735 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:35:07.950893   65735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:35:07.950946   65735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:35:07.977912   65735 cri.go:89] found id: ""
	I1002 20:35:07.977979   65735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:35:07.986292   65735 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:35:07.994921   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:35:07.994975   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:35:08.002478   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:35:08.002496   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:35:08.002543   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:35:08.009971   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:35:08.010024   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:35:08.017237   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:35:08.024502   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:35:08.024556   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:35:08.031907   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.039330   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:35:08.039379   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.046463   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:35:08.053807   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:35:08.053903   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
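The four grep-and-remove pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise deleted before kubeadm init runs. A minimal sketch of the same pattern as a loop (the loop itself is hypothetical; the log issues one command per file over SSH):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # remove the file unless it already targets the expected endpoint
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done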
	I1002 20:35:08.060624   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:35:08.119106   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:35:08.173560   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:39:12.973904   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:39:12.973990   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:39:12.976774   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:12.976818   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:12.976887   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:12.976934   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:12.976995   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:12.977058   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:12.977098   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:12.977140   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:12.977186   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:12.977225   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:12.977267   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:12.977305   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:12.977341   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:12.977406   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:12.977543   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:12.977704   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:12.977780   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:12.979758   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:12.979842   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:12.979923   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:12.980019   65735 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:39:12.980106   65735 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:39:12.980194   65735 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:39:12.980269   65735 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:39:12.980321   65735 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:39:12.980413   65735 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980466   65735 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:39:12.980568   65735 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980632   65735 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:39:12.980716   65735 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:39:12.980755   65735 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:39:12.980811   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:12.980857   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:12.980906   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:12.980951   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:12.981048   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:12.981126   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:12.981273   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:12.981350   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:12.983755   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:12.983842   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:12.983938   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:12.984036   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:12.984176   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:12.984257   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:12.984363   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:12.984488   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:12.984545   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:12.984753   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:12.984854   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:12.984903   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.922515ms
	I1002 20:39:12.984979   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:12.985054   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:12.985161   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:12.985238   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:39:12.985311   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	I1002 20:39:12.985378   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	I1002 20:39:12.985440   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	I1002 20:39:12.985446   65735 kubeadm.go:318] 
	I1002 20:39:12.985522   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:39:12.985593   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:39:12.985693   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:39:12.985824   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:39:12.985912   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:39:12.986021   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:39:12.986035   65735 kubeadm.go:318] 
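kubeadm's troubleshooting advice above can be followed directly on the node (reachable with something like minikube ssh -p ha-872795; the -p flag is an assumption based on the profile name in the certificate logs). CONTAINERID is kubeadm's placeholder and stays one here:

	# list all kube-* containers, including ones that already exited
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then read the logs of whichever container is failing
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID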
	W1002 20:39:12.986169   65735 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.922515ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
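The three components that failed their checks are probed at plain HTTPS endpoints, so they can be re-tested by hand from the node to see which one never came up; a minimal sketch (-k because the serving certificates are not in the host trust store):

	curl -k https://127.0.0.1:10257/healthz    # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez      # kube-scheduler
	curl -k https://192.168.49.2:8443/livez    # kube-apiserver
	# "connection refused" means the component's container is not running at all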
	
	I1002 20:39:12.986234   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:39:15.725293   65735 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.739035287s)
	I1002 20:39:15.725355   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:39:15.737845   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:39:15.737903   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:39:15.745492   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:39:15.745511   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:39:15.745550   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:39:15.752939   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:39:15.752992   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:39:15.759738   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:39:15.767128   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:39:15.767188   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:39:15.774064   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.781262   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:39:15.781310   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.788069   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:39:15.794962   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:39:15.795005   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:39:15.801555   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:39:15.838178   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:15.838265   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:15.857468   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:15.857554   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:15.857595   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:15.857662   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:15.857732   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:15.857790   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:15.857850   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:15.857910   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:15.857968   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:15.858025   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:15.858074   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:15.913423   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:15.913519   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:15.913695   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:15.919381   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:15.923412   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:15.923499   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:15.923576   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:15.923699   65735 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:39:15.923782   65735 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:39:15.923894   65735 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:39:15.923992   65735 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:39:15.924083   65735 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:39:15.924181   65735 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:39:15.924290   65735 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:39:15.924413   65735 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:39:15.924467   65735 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:39:15.924516   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:15.972082   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:16.453776   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:16.604369   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:16.663878   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:16.897315   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:16.897840   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:16.900020   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:16.904460   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:16.904580   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:16.904708   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:16.904777   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:16.917381   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:16.917507   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:16.923733   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:16.923961   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:16.924014   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:17.026795   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:17.026994   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:18.027761   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001153954s
	I1002 20:39:18.030723   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:18.030874   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:18.030983   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:18.031132   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:43:18.032268   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	I1002 20:43:18.032470   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	I1002 20:43:18.032708   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	I1002 20:43:18.032742   65735 kubeadm.go:318] 
	I1002 20:43:18.032978   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:43:18.033184   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:43:18.033380   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:43:18.033599   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:43:18.033817   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:43:18.034037   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:43:18.034052   65735 kubeadm.go:318] 
	I1002 20:43:18.036299   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:43:18.036439   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:43:18.037026   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 20:43:18.037095   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:43:18.037166   65735 kubeadm.go:402] duration metric: took 8m10.08637402s to StartCluster
	I1002 20:43:18.037208   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:43:18.037266   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:43:18.063637   65735 cri.go:89] found id: ""
	I1002 20:43:18.063696   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.063705   65735 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:43:18.063713   65735 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:43:18.063764   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:43:18.087963   65735 cri.go:89] found id: ""
	I1002 20:43:18.087984   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.087991   65735 logs.go:284] No container was found matching "etcd"
	I1002 20:43:18.087996   65735 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:43:18.088039   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:43:18.111877   65735 cri.go:89] found id: ""
	I1002 20:43:18.111898   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.111905   65735 logs.go:284] No container was found matching "coredns"
	I1002 20:43:18.111911   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:43:18.111958   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:43:18.134814   65735 cri.go:89] found id: ""
	I1002 20:43:18.134834   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.134841   65735 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:43:18.134847   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:43:18.134887   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:43:18.158528   65735 cri.go:89] found id: ""
	I1002 20:43:18.158551   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.158558   65735 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:43:18.158564   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:43:18.158607   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:43:18.182849   65735 cri.go:89] found id: ""
	I1002 20:43:18.182871   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.182878   65735 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:43:18.182883   65735 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:43:18.182926   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:43:18.207447   65735 cri.go:89] found id: ""
	I1002 20:43:18.207470   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.207481   65735 logs.go:284] No container was found matching "kindnet"
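Each "listing CRI containers" step above is a name-filtered crictl query; zero matches for every control-plane component means the runtime never created those containers, which is consistent with the sd-bus errors in the CRI-O log further down. The equivalent manual check, using the exact commands from the log:

	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd
	# empty output confirms the container was never created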
	I1002 20:43:18.207493   65735 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:43:18.207507   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:43:18.263385   65735 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
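Connection refused on localhost:8443 means nothing is listening on the apiserver port at all, matching the empty container listings above. A quick hedged check on the node (assuming ss from iproute2 is available there):

	sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"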
	I1002 20:43:18.263409   65735 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:43:18.263421   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:43:18.324936   65735 logs.go:123] Gathering logs for container status ...
	I1002 20:43:18.324969   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:43:18.351509   65735 logs.go:123] Gathering logs for kubelet ...
	I1002 20:43:18.351536   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:43:18.417761   65735 logs.go:123] Gathering logs for dmesg ...
	I1002 20:43:18.417794   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
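The same diagnostics minikube gathers here can be collected manually with the commands from the log:

	sudo journalctl -u crio -n 400      # CRI-O runtime log
	sudo journalctl -u kubelet -n 400   # kubelet log
	sudo crictl ps -a                   # container status
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400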
	W1002 20:43:18.428567   65735 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:43:18.428641   65735 out.go:285] * 
	W1002 20:43:18.428716   65735 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.428735   65735 out.go:285] * 
	W1002 20:43:18.430415   65735 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
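The log bundle the box asks for can be produced as follows (scoping it to this profile with -p is an assumption based on the profile name in the logs):

	minikube logs --file=logs.txt -p ha-872795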
	I1002 20:43:18.433894   65735 out.go:203] 
	W1002 20:43:18.435307   65735 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.435342   65735 out.go:285] * 
	I1002 20:43:18.437263   65735 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.826356719Z" level=info msg="createCtr: deleting container 138a6dabf63007d1b4a2c0acc749aa56f5ab1e459e19ff534e44fab9b2d43fa0 from storage" id=9f60a799-18bb-48b0-8c3f-888a44f48b88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.827919811Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=41ed485b-12d3-411c-967a-896b2c0fdab3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.830031075Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-872795_kube-system_d62260a65d00c99f27090ce6484101a9_0" id=9f60a799-18bb-48b0-8c3f-888a44f48b88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.801883479Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=57eb11f2-ebce-4ac7-9758-5513f4d13809 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.802810009Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=5e092a64-a9db-46ea-a6f8-75a15c4871a4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.803693911Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-872795/kube-apiserver" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.803901947Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.807281855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.8076945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.825644838Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.827042779Z" level=info msg="createCtr: deleting container ID 3e8152e866fa071f321cf012566678af0c3b90587478627e33a052a71fe906e3 from idIndex" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.827075909Z" level=info msg="createCtr: removing container 3e8152e866fa071f321cf012566678af0c3b90587478627e33a052a71fe906e3" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.827105892Z" level=info msg="createCtr: deleting container 3e8152e866fa071f321cf012566678af0c3b90587478627e33a052a71fe906e3 from storage" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.829276684Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-872795_kube-system_107e053e4ed538c93835a81754178211_0" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.801275985Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=4fca0a49-1065-4871-9912-9e712931de4a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.802205974Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=3341d1d0-63dd-4440-96cb-dbe32f52fd15 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.80316376Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-872795/kube-scheduler" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.803420994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.806973431Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.807354248Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.819369479Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.820888398Z" level=info msg="createCtr: deleting container ID 0bf698f0873c641479ec231acbff24b3fb290183b302d4429c9a7853f9485533 from idIndex" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.820923326Z" level=info msg="createCtr: removing container 0bf698f0873c641479ec231acbff24b3fb290183b302d4429c9a7853f9485533" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.820960943Z" level=info msg="createCtr: deleting container 0bf698f0873c641479ec231acbff24b3fb290183b302d4429c9a7853f9485533 from storage" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.823048862Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-872795_kube-system_9558d13dd1c45ecd0f3d491377941404_0" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:45:00.767307    3377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:00.767773    3377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:00.769332    3377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:00.769758    3377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:00.771247    3377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:45:00 up  1:27,  0 user,  load average: 0.90, 0.48, 0.25
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.437895    1957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: I1002 20:44:52.603095    1957 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.603495    1957 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.801483    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.829535    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:44:52 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:52 ha-872795 kubelet[1957]:  > podSandboxID="ef6e539a74c14f8dbdf14c5253393747982dcd66fa486356dbfe446d81891de8"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.829641    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:44:52 ha-872795 kubelet[1957]:         container kube-apiserver start failed in pod kube-apiserver-ha-872795_kube-system(107e053e4ed538c93835a81754178211): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:52 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.829696    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-872795" podUID="107e053e4ed538c93835a81754178211"
	Oct 02 20:44:56 ha-872795 kubelet[1957]: E1002 20:44:56.709315    1957 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-872795.186ac7230cb11fa8  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-872795,UID:ha-872795,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-872795 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-872795,},FirstTimestamp:2025-10-02 20:39:17.792317352 +0000 UTC m=+0.765279708,LastTimestamp:2025-10-02 20:39:17.792317352 +0000 UTC m=+0.765279708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-872795,}"
	Oct 02 20:44:57 ha-872795 kubelet[1957]: E1002 20:44:57.817829    1957 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-872795\" not found"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.401703    1957 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.439050    1957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: I1002 20:44:59.605385    1957 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.605749    1957 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.800841    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.823342    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:44:59 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:59 ha-872795 kubelet[1957]:  > podSandboxID="d10793148ad7216a0ef1666e33973783c7ba51256a58959b454c6b69c2bbe01a"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.823436    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:44:59 ha-872795 kubelet[1957]:         container kube-scheduler start failed in pod kube-scheduler-ha-872795_kube-system(9558d13dd1c45ecd0f3d491377941404): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:59 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.823467    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-872795" podUID="9558d13dd1c45ecd0f3d491377941404"
	

-- /stdout --
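
Every CreateContainer attempt in the crio and kubelet logs above dies with "cannot open sd-bus: No such file or directory", which is the error an OCI runtime's systemd cgroup driver raises when it cannot reach a D-Bus socket inside the node container. A minimal triage sketch, assuming the ha-872795 node container is still running and crio is configured for the systemd cgroup manager (the paths below are the stock systemd/crio locations, not taken from this report):

    docker exec ha-872795 systemctl is-system-running          # is systemd actually up inside the kic container?
    docker exec ha-872795 ls -l /run/dbus/system_bus_socket    # the socket the systemd cgroup driver dials
    docker exec ha-872795 grep -r cgroup_manager /etc/crio/    # confirm which cgroup driver crio was configured with

If systemd or the bus socket is missing, runc cannot place containers into systemd-managed cgroups, which would explain why every static pod fails identically.
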
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 6 (284.492118ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:45:01.126584   73543 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (1.46s)
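
The status stderr above shows why everything downstream fails: the "ha-872795" entry is missing from /home/jenkins/minikube-integration/21683-9327/kubeconfig, so each later kubectl call aborts before reaching the cluster. A minimal recovery sketch, assuming the profile still exists on disk (standard minikube/kubectl subcommands, not output from this run):

    out/minikube-linux-amd64 -p ha-872795 update-context   # rewrite the kubeconfig entry for this profile
    kubectl config get-contexts                            # verify ha-872795 is listed again
    kubectl config use-context ha-872795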

TestMultiControlPlane/serial/NodeLabels (1.28s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-872795 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-872795 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (43.706025ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-872795

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-872795 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-872795 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
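
The "unexpected end of JSON input" on the line above is secondary damage: the jsonpath query printed nothing because the context lookup failed before any API call, so the test had only an empty string to decode. Against a healthy cluster the same label check can be reproduced by hand; a sketch assuming a working ha-872795 context:

    kubectl --context ha-872795 get nodes -o jsonpath='[{range .items[*]}{.metadata.labels},{end}]'
    kubectl --context ha-872795 get nodes --show-labels    # human-readable cross-check
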
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 66301,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:35:02.679281779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f7265370a58453c840a8192d5da4a6feb4bbbb948c2dbabc9298cbd8189ca6f",
	            "SandboxKey": "/var/run/docker/netns/6f7265370a58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:33:39:9e:e0:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "632391284aa44c2eb0bec81b03555e3fed7084145763827002664c08b386a588",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
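
The inspect output above shows the container healthy and the apiserver's 8443/tcp published on an ephemeral loopback port (127.0.0.1:32786), so the breakage is in the kubeconfig wiring rather than the port mapping. A sketch for reading that mapping directly, using the same template style minikube itself runs for 22/tcp later in these logs:

    docker port ha-872795 8443/tcp
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-872795
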
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 6 (281.690049ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:45:01.471701   73674 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
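
Per `minikube status --help`, the exit status encodes host, cluster, and Kubernetes health on its low bits (least significant first, a set bit meaning that check failed); if that documented encoding applies here, exit status 6 decodes as host running but cluster and Kubernetes checks failed, matching the Running/Stopped mix printed above. A quick decode sketch under that assumption:

    out/minikube-linux-amd64 status -p ha-872795 >/dev/null 2>&1; rc=$?
    echo "host_failed=$((rc & 1)) cluster_failed=$((rc >> 1 & 1)) k8s_failed=$((rc >> 2 & 1))"
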
helpers_test.go:252: <<< TestMultiControlPlane/serial/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-753218 image ls --format yaml --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image build -t localhost/my-image:functional-753218 testdata/build --alsologtostderr          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format json --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format table --alsologtostderr                                                     │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls                                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ delete  │ -p functional-753218                                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ start   │ ha-872795 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- rollout status deployment/busybox                                                          │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node add --alsologtostderr -v 5                                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:34:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:34:57.603077   65735 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:34:57.603305   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603314   65735 out.go:374] Setting ErrFile to fd 2...
	I1002 20:34:57.603317   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603506   65735 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:34:57.604047   65735 out.go:368] Setting JSON to false
	I1002 20:34:57.604900   65735 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4647,"bootTime":1759432651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:34:57.604978   65735 start.go:140] virtualization: kvm guest
	I1002 20:34:57.606757   65735 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:34:57.608100   65735 notify.go:221] Checking for updates...
	I1002 20:34:57.608137   65735 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:34:57.609378   65735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:34:57.610568   65735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:34:57.611851   65735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:34:57.613234   65735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:34:57.614447   65735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:34:57.615922   65735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:34:57.640606   65735 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:34:57.640699   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.694592   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.684173929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.694720   65735 docker.go:319] overlay module found
	I1002 20:34:57.696786   65735 out.go:179] * Using the docker driver based on user configuration
	I1002 20:34:57.698015   65735 start.go:306] selected driver: docker
	I1002 20:34:57.698032   65735 start.go:936] validating driver "docker" against <nil>
	I1002 20:34:57.698041   65735 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:34:57.698548   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.751429   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.741121046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.751598   65735 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:34:57.751837   65735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:34:57.753668   65735 out.go:179] * Using Docker driver with root privileges
	I1002 20:34:57.755151   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:34:57.755225   65735 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 20:34:57.755237   65735 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:34:57.755322   65735 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:34:57.756619   65735 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:34:57.757986   65735 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:34:57.759335   65735 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:34:57.760371   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:57.760415   65735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:34:57.760432   65735 cache.go:59] Caching tarball of preloaded images
	I1002 20:34:57.760405   65735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:34:57.760507   65735 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:34:57.760515   65735 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:34:57.760879   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:34:57.760904   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json: {Name:mk498e6817e1668b9a52683e21d0fee45dc5b922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:34:57.779767   65735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:34:57.779785   65735 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:34:57.779798   65735 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:34:57.779817   65735 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:34:57.779902   65735 start.go:365] duration metric: took 70.896µs to acquireMachinesLock for "ha-872795"
	I1002 20:34:57.779922   65735 start.go:94] Provisioning new machine with config: &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:34:57.779985   65735 start.go:126] createHost starting for "" (driver="docker")
	I1002 20:34:57.782551   65735 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 20:34:57.782770   65735 start.go:160] libmachine.API.Create for "ha-872795" (driver="docker")
	I1002 20:34:57.782797   65735 client.go:168] LocalClient.Create starting
	I1002 20:34:57.782848   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem
	I1002 20:34:57.782884   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782899   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.782957   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem
	I1002 20:34:57.782975   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782985   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.783261   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:34:57.799786   65735 cli_runner.go:211] docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:34:57.799838   65735 network_create.go:284] running [docker network inspect ha-872795] to gather additional debugging logs...
	I1002 20:34:57.799850   65735 cli_runner.go:164] Run: docker network inspect ha-872795
	W1002 20:34:57.816003   65735 cli_runner.go:211] docker network inspect ha-872795 returned with exit code 1
	I1002 20:34:57.816028   65735 network_create.go:287] error running [docker network inspect ha-872795]: docker network inspect ha-872795: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-872795 not found
	I1002 20:34:57.816042   65735 network_create.go:289] output of [docker network inspect ha-872795]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-872795 not found
	
	** /stderr **
	I1002 20:34:57.816123   65735 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:34:57.832837   65735 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d0e700}
	I1002 20:34:57.832869   65735 network_create.go:124] attempt to create docker network ha-872795 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:34:57.832917   65735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-872795 ha-872795
	I1002 20:34:57.888359   65735 network_create.go:108] docker network ha-872795 192.168.49.0/24 created
	I1002 20:34:57.888401   65735 kic.go:121] calculated static IP "192.168.49.2" for the "ha-872795" container
	I1002 20:34:57.888473   65735 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:34:57.905091   65735 cli_runner.go:164] Run: docker volume create ha-872795 --label name.minikube.sigs.k8s.io=ha-872795 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:34:57.922367   65735 oci.go:103] Successfully created a docker volume ha-872795
	I1002 20:34:57.922439   65735 cli_runner.go:164] Run: docker run --rm --name ha-872795-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --entrypoint /usr/bin/test -v ha-872795:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:34:58.293551   65735 oci.go:107] Successfully prepared a docker volume ha-872795
	I1002 20:34:58.293606   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:58.293630   65735 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:34:58.293727   65735 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:35:02.570537   65735 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.276723115s)
	I1002 20:35:02.570574   65735 kic.go:203] duration metric: took 4.276941565s to extract preloaded images to volume ...
	W1002 20:35:02.570728   65735 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 20:35:02.570763   65735 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 20:35:02.570812   65735 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:35:02.622879   65735 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-872795 --name ha-872795 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-872795 --network ha-872795 --ip 192.168.49.2 --volume ha-872795:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:35:02.884170   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Running}}
	I1002 20:35:02.901900   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:02.919319   65735 cli_runner.go:164] Run: docker exec ha-872795 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:35:02.965612   65735 oci.go:144] the created container "ha-872795" has a running status.
	I1002 20:35:02.965668   65735 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa...
	I1002 20:35:03.839142   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 20:35:03.839192   65735 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:35:03.863522   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.880799   65735 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:35:03.880817   65735 kic_runner.go:114] Args: [docker exec --privileged ha-872795 chown docker:docker /home/docker/.ssh/authorized_keys]
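	(With the key installed, the node is reachable over the published SSH port shown below, 32783; a manual login along these lines is useful when reproducing failures by hand, though it is not something this test does.)
	    ssh -o StrictHostKeyChecking=no -p 32783 \
	      -i /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa \
	      docker@127.0.0.1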
	I1002 20:35:03.926950   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.943420   65735 machine.go:93] provisionDockerMachine start ...
	I1002 20:35:03.943502   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:03.960150   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:03.960365   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:03.960377   65735 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:35:04.102560   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.102595   65735 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:35:04.102672   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.119770   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.119958   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.119969   65735 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:35:04.269954   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.270024   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.289297   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.289532   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.289560   65735 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
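	(The script above is idempotent: it only touches 127.0.1.1 when the hostname is absent. A quick check that it converged, assuming it is run inside the node:)
	    grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts   # should print: 127.0.1.1 ha-872795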
	I1002 20:35:04.431294   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:35:04.431331   65735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:35:04.431373   65735 ubuntu.go:190] setting up certificates
	I1002 20:35:04.431385   65735 provision.go:84] configureAuth start
	I1002 20:35:04.431438   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:04.448348   65735 provision.go:143] copyHostCerts
	I1002 20:35:04.448379   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448406   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:35:04.448415   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448488   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:35:04.448574   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448595   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:35:04.448601   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448642   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:35:04.448726   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448747   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:35:04.448752   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448791   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:35:04.448862   65735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:35:04.988583   65735 provision.go:177] copyRemoteCerts
	I1002 20:35:04.988660   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:35:04.988705   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.006318   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.107594   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:35:05.107664   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:35:05.126297   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:35:05.126364   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 20:35:05.143480   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:35:05.143534   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:35:05.159876   65735 provision.go:87] duration metric: took 728.473665ms to configureAuth
	I1002 20:35:05.159898   65735 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:35:05.160046   65735 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:35:05.160126   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.177170   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:05.177376   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:05.177394   65735 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:35:05.428320   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:35:05.428345   65735 machine.go:96] duration metric: took 1.484903501s to provisionDockerMachine
	I1002 20:35:05.428356   65735 client.go:171] duration metric: took 7.645553848s to LocalClient.Create
	I1002 20:35:05.428375   65735 start.go:168] duration metric: took 7.645605965s to libmachine.API.Create "ha-872795"
	I1002 20:35:05.428384   65735 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:35:05.428397   65735 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:35:05.428457   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:35:05.428518   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.445519   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.548684   65735 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:35:05.552289   65735 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:35:05.552315   65735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:35:05.552328   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:35:05.552389   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:35:05.552506   65735 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:35:05.552523   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:35:05.552690   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:35:05.559922   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:05.579393   65735 start.go:297] duration metric: took 150.9926ms for postStartSetup
	I1002 20:35:05.579737   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.596296   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:35:05.596542   65735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:35:05.596580   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.613097   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.710446   65735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:35:05.714631   65735 start.go:129] duration metric: took 7.934634425s to createHost
	I1002 20:35:05.714669   65735 start.go:84] releasing machines lock for "ha-872795", held for 7.934754949s
	I1002 20:35:05.714730   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.731548   65735 ssh_runner.go:195] Run: cat /version.json
	I1002 20:35:05.731603   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.731636   65735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:35:05.731714   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.749775   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.750794   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.899878   65735 ssh_runner.go:195] Run: systemctl --version
	I1002 20:35:05.906012   65735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:35:05.939030   65735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:35:05.945162   65735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:35:05.945228   65735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:35:05.970822   65735 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
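	(CRI-O only loads *.conf/*.conflist/*.json files from this directory, so the rename to *.mk_disabled is enough to hide the default bridge configs; kindnet installs its own config later. A sketch of how to see what remains active:)
	    sudo ls -l /etc/cni/net.d   # *.mk_disabled entries are ignored by CRI-O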
	I1002 20:35:05.970843   65735 start.go:496] detecting cgroup driver to use...
	I1002 20:35:05.970881   65735 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:35:05.970920   65735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:35:05.986282   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:35:05.998266   65735 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:35:05.998312   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:35:06.014196   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:35:06.031061   65735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:35:06.110898   65735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:35:06.195426   65735 docker.go:234] disabling docker service ...
	I1002 20:35:06.195481   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:35:06.213240   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:35:06.225772   65735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:35:06.305091   65735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:35:06.379591   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:35:06.391746   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:35:06.405291   65735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:35:06.405351   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.415561   65735 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:35:06.415640   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.425296   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.433995   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.442516   65735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:35:06.450342   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.458541   65735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.471597   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.479764   65735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:35:06.486745   65735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:35:06.493925   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:06.570695   65735 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:35:06.674347   65735 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:35:06.674398   65735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:35:06.678210   65735 start.go:564] Will wait 60s for crictl version
	I1002 20:35:06.678265   65735 ssh_runner.go:195] Run: which crictl
	I1002 20:35:06.681670   65735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:35:06.705846   65735 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
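	(The two 60s waits above amount to polling for the socket and then for a crictl response; a minimal shell equivalent, assumed rather than minikube's actual code path:)
	    for i in $(seq 1 60); do
	      [ -S /var/run/crio/crio.sock ] && break   # wait for CRI-O to create its socket
	      sleep 1
	    done
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version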
	I1002 20:35:06.705915   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.732749   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.761896   65735 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:35:06.763235   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:35:06.779789   65735 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:35:06.783762   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.793731   65735 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:35:06.793825   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:35:06.793893   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.823605   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.823628   65735 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:35:06.823701   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.848195   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.848217   65735 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:35:06.848224   65735 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:35:06.848297   65735 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:35:06.848363   65735 ssh_runner.go:195] Run: crio config
	I1002 20:35:06.893288   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:35:06.893308   65735 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:35:06.893324   65735 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:35:06.893344   65735 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:35:06.893447   65735 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
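	(The rendered config can be sanity-checked before init; recent kubeadm releases ship a validator. A sketch, assuming it is run inside the node against the path the file is copied to later in this log:)
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml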
	
	I1002 20:35:06.893469   65735 kube-vip.go:115] generating kube-vip config ...
	I1002 20:35:06.893510   65735 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 20:35:06.905349   65735 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:35:06.905448   65735 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
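	(Once this static pod runs and the control plane is up, the VIP 192.168.49.254 should be claimed via ARP on eth0 of whichever node holds the plndr-cp-lock lease; a hypothetical verification from inside a control-plane node:)
	    ip addr show eth0 | grep 192.168.49.254          # VIP bound on the current leader
	    kubectl -n kube-system get lease plndr-cp-lock   # shows which node holds the lock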
	I1002 20:35:06.905495   65735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:35:06.913067   65735 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:35:06.913130   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 20:35:06.920364   65735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:35:06.932249   65735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:35:06.947038   65735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:35:06.959316   65735 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 20:35:06.973553   65735 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 20:35:06.977144   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.987569   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:07.065543   65735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:35:07.088051   65735 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:35:07.088073   65735 certs.go:195] generating shared ca certs ...
	I1002 20:35:07.088093   65735 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.088253   65735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:35:07.088318   65735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:35:07.088333   65735 certs.go:257] generating profile certs ...
	I1002 20:35:07.088394   65735 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:35:07.088419   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt with IP's: []
	I1002 20:35:07.271177   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt ...
	I1002 20:35:07.271216   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt: {Name:mkc9958351da3092e35cf2a0dff49f20b7dda9b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271433   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key ...
	I1002 20:35:07.271450   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key: {Name:mk9049d9b976553e3e7ceaf1eae8281114f2b93c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271566   65735 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2
	I1002 20:35:07.271586   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 20:35:07.440257   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 ...
	I1002 20:35:07.440291   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2: {Name:mk26eb0eaef560c0eb32f96e5b3e81634dfe2a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440486   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 ...
	I1002 20:35:07.440505   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2: {Name:mkc88fc8c0ccf5f6abaf6c3777b5135914058a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440621   65735 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt
	I1002 20:35:07.440775   65735 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key
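	(The SANs requested above, including the service IP 10.96.0.1 and the HA VIP 192.168.49.254, can be confirmed on the generated certificate; a sketch:)
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'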
	I1002 20:35:07.440868   65735 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:35:07.440913   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt with IP's: []
	I1002 20:35:07.579277   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt ...
	I1002 20:35:07.579309   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt: {Name:mk15b2fa98abbfdb48c91ec680acf9fb3d36790e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579497   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key ...
	I1002 20:35:07.579512   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key: {Name:mk9a0c724bfce05599dc583fc00cefa3d87ea4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579634   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:35:07.579676   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:35:07.579703   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:35:07.579724   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:35:07.579741   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:35:07.579761   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:35:07.579777   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:35:07.579793   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:35:07.579863   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:35:07.579919   65735 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:35:07.579934   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:35:07.579978   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:35:07.580017   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:35:07.580054   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:35:07.580109   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:07.580147   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.580168   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.580188   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.580777   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:35:07.598837   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:35:07.615911   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:35:07.632508   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:35:07.649448   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:35:07.666220   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:35:07.682564   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:35:07.698671   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:35:07.715367   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:35:07.733828   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:35:07.750440   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:35:07.767319   65735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:35:07.779274   65735 ssh_runner.go:195] Run: openssl version
	I1002 20:35:07.785412   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:35:07.793537   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797120   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797185   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.830751   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:35:07.839226   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:35:07.847497   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851214   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851284   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.884464   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:35:07.892882   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:35:07.901283   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904919   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904989   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.938825   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:35:07.947236   65735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:35:07.950736   65735 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:35:07.950798   65735 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:35:07.950893   65735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:35:07.950946   65735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:35:07.977912   65735 cri.go:89] found id: ""
	I1002 20:35:07.977979   65735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:35:07.986292   65735 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:35:07.994921   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:35:07.994975   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:35:08.002478   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:35:08.002496   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:35:08.002543   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:35:08.009971   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:35:08.010024   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:35:08.017237   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:35:08.024502   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:35:08.024556   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:35:08.031907   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.039330   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:35:08.039379   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.046463   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:35:08.053807   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:35:08.053903   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:35:08.060624   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:35:08.119106   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:35:08.173560   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:39:12.973904   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:39:12.973990   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:39:12.976774   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:12.976818   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:12.976887   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:12.976934   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:12.976995   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:12.977058   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:12.977098   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:12.977140   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:12.977186   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:12.977225   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:12.977267   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:12.977305   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:12.977341   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:12.977406   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:12.977543   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:12.977704   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:12.977780   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:12.979758   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:12.979842   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:12.979923   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:12.980019   65735 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:39:12.980106   65735 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:39:12.980194   65735 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:39:12.980269   65735 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:39:12.980321   65735 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:39:12.980413   65735 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980466   65735 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:39:12.980568   65735 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980632   65735 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:39:12.980716   65735 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:39:12.980755   65735 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:39:12.980811   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:12.980857   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:12.980906   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:12.980951   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:12.981048   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:12.981126   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:12.981273   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:12.981350   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:12.983755   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:12.983842   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:12.983938   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:12.984036   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:12.984176   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:12.984257   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:12.984363   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:12.984488   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:12.984545   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:12.984753   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:12.984854   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:12.984903   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.922515ms
	I1002 20:39:12.984979   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:12.985054   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:12.985161   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:12.985238   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:39:12.985311   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	I1002 20:39:12.985378   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	I1002 20:39:12.985440   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	I1002 20:39:12.985446   65735 kubeadm.go:318] 
	I1002 20:39:12.985522   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:39:12.985593   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:39:12.985693   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:39:12.985824   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:39:12.985912   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:39:12.986021   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:39:12.986035   65735 kubeadm.go:318] 
	W1002 20:39:12.986169   65735 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.922515ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 20:39:12.986234   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:39:15.725293   65735 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.739035287s)
	I1002 20:39:15.725355   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:39:15.737845   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:39:15.737903   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:39:15.745492   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:39:15.745511   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:39:15.745550   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:39:15.752939   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:39:15.752992   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:39:15.759738   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:39:15.767128   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:39:15.767188   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:39:15.774064   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.781262   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:39:15.781310   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.788069   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:39:15.794962   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:39:15.795005   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:39:15.801555   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:39:15.838178   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:15.838265   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:15.857468   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:15.857554   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:15.857595   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:15.857662   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:15.857732   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:15.857790   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:15.857850   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:15.857910   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:15.857968   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:15.858025   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:15.858074   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:15.913423   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:15.913519   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:15.913695   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:15.919381   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:15.923412   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:15.923499   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:15.923576   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:15.923699   65735 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:39:15.923782   65735 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:39:15.923894   65735 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:39:15.923992   65735 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:39:15.924083   65735 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:39:15.924181   65735 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:39:15.924290   65735 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:39:15.924413   65735 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:39:15.924467   65735 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:39:15.924516   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:15.972082   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:16.453776   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:16.604369   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:16.663878   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:16.897315   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:16.897840   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:16.900020   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:16.904460   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:16.904580   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:16.904708   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:16.904777   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:16.917381   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:16.917507   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:16.923733   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:16.923961   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:16.924014   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:17.026795   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:17.026994   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:18.027761   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001153954s
	I1002 20:39:18.030723   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:18.030874   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:18.030983   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:18.031132   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:43:18.032268   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	I1002 20:43:18.032470   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	I1002 20:43:18.032708   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	I1002 20:43:18.032742   65735 kubeadm.go:318] 
	I1002 20:43:18.032978   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:43:18.033184   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:43:18.033380   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:43:18.033599   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:43:18.033817   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:43:18.034037   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:43:18.034052   65735 kubeadm.go:318] 
	I1002 20:43:18.036299   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:43:18.036439   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:43:18.037026   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 20:43:18.037095   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:43:18.037166   65735 kubeadm.go:402] duration metric: took 8m10.08637402s to StartCluster
	I1002 20:43:18.037208   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:43:18.037266   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:43:18.063637   65735 cri.go:89] found id: ""
	I1002 20:43:18.063696   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.063705   65735 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:43:18.063713   65735 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:43:18.063764   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:43:18.087963   65735 cri.go:89] found id: ""
	I1002 20:43:18.087984   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.087991   65735 logs.go:284] No container was found matching "etcd"
	I1002 20:43:18.087996   65735 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:43:18.088039   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:43:18.111877   65735 cri.go:89] found id: ""
	I1002 20:43:18.111898   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.111905   65735 logs.go:284] No container was found matching "coredns"
	I1002 20:43:18.111911   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:43:18.111958   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:43:18.134814   65735 cri.go:89] found id: ""
	I1002 20:43:18.134834   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.134841   65735 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:43:18.134847   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:43:18.134887   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:43:18.158528   65735 cri.go:89] found id: ""
	I1002 20:43:18.158551   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.158558   65735 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:43:18.158564   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:43:18.158607   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:43:18.182849   65735 cri.go:89] found id: ""
	I1002 20:43:18.182871   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.182878   65735 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:43:18.182883   65735 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:43:18.182926   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:43:18.207447   65735 cri.go:89] found id: ""
	I1002 20:43:18.207470   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.207481   65735 logs.go:284] No container was found matching "kindnet"
	I1002 20:43:18.207493   65735 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:43:18.207507   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:43:18.263385   65735 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:43:18.263409   65735 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:43:18.263421   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:43:18.324936   65735 logs.go:123] Gathering logs for container status ...
	I1002 20:43:18.324969   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:43:18.351509   65735 logs.go:123] Gathering logs for kubelet ...
	I1002 20:43:18.351536   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:43:18.417761   65735 logs.go:123] Gathering logs for dmesg ...
	I1002 20:43:18.417794   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1002 20:43:18.428567   65735 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:43:18.428641   65735 out.go:285] * 
	W1002 20:43:18.428716   65735 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.428735   65735 out.go:285] * 
	W1002 20:43:18.430415   65735 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:43:18.433894   65735 out.go:203] 
	W1002 20:43:18.435307   65735 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.435342   65735 out.go:285] * 
	I1002 20:43:18.437263   65735 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.826356719Z" level=info msg="createCtr: deleting container 138a6dabf63007d1b4a2c0acc749aa56f5ab1e459e19ff534e44fab9b2d43fa0 from storage" id=9f60a799-18bb-48b0-8c3f-888a44f48b88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.827919811Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=41ed485b-12d3-411c-967a-896b2c0fdab3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.830031075Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-872795_kube-system_d62260a65d00c99f27090ce6484101a9_0" id=9f60a799-18bb-48b0-8c3f-888a44f48b88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.801883479Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=57eb11f2-ebce-4ac7-9758-5513f4d13809 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.802810009Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=5e092a64-a9db-46ea-a6f8-75a15c4871a4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.803693911Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-872795/kube-apiserver" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.803901947Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.807281855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.8076945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.825644838Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.827042779Z" level=info msg="createCtr: deleting container ID 3e8152e866fa071f321cf012566678af0c3b90587478627e33a052a71fe906e3 from idIndex" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.827075909Z" level=info msg="createCtr: removing container 3e8152e866fa071f321cf012566678af0c3b90587478627e33a052a71fe906e3" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.827105892Z" level=info msg="createCtr: deleting container 3e8152e866fa071f321cf012566678af0c3b90587478627e33a052a71fe906e3 from storage" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.829276684Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-872795_kube-system_107e053e4ed538c93835a81754178211_0" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.801275985Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=4fca0a49-1065-4871-9912-9e712931de4a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.802205974Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=3341d1d0-63dd-4440-96cb-dbe32f52fd15 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.80316376Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-872795/kube-scheduler" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.803420994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.806973431Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.807354248Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.819369479Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.820888398Z" level=info msg="createCtr: deleting container ID 0bf698f0873c641479ec231acbff24b3fb290183b302d4429c9a7853f9485533 from idIndex" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.820923326Z" level=info msg="createCtr: removing container 0bf698f0873c641479ec231acbff24b3fb290183b302d4429c9a7853f9485533" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.820960943Z" level=info msg="createCtr: deleting container 0bf698f0873c641479ec231acbff24b3fb290183b302d4429c9a7853f9485533 from storage" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.823048862Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-872795_kube-system_9558d13dd1c45ecd0f3d491377941404_0" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:45:02.037882    3537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:02.038403    3537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:02.040011    3537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:02.040432    3537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:02.041913    3537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:45:02 up  1:27,  0 user,  load average: 0.90, 0.48, 0.25
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.437895    1957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: I1002 20:44:52.603095    1957 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.603495    1957 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.801483    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.829535    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:44:52 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:52 ha-872795 kubelet[1957]:  > podSandboxID="ef6e539a74c14f8dbdf14c5253393747982dcd66fa486356dbfe446d81891de8"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.829641    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:44:52 ha-872795 kubelet[1957]:         container kube-apiserver start failed in pod kube-apiserver-ha-872795_kube-system(107e053e4ed538c93835a81754178211): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:52 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.829696    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-872795" podUID="107e053e4ed538c93835a81754178211"
	Oct 02 20:44:56 ha-872795 kubelet[1957]: E1002 20:44:56.709315    1957 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-872795.186ac7230cb11fa8  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-872795,UID:ha-872795,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-872795 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-872795,},FirstTimestamp:2025-10-02 20:39:17.792317352 +0000 UTC m=+0.765279708,LastTimestamp:2025-10-02 20:39:17.792317352 +0000 UTC m=+0.765279708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-872795,}"
	Oct 02 20:44:57 ha-872795 kubelet[1957]: E1002 20:44:57.817829    1957 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-872795\" not found"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.401703    1957 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.439050    1957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: I1002 20:44:59.605385    1957 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.605749    1957 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.800841    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.823342    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:44:59 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:59 ha-872795 kubelet[1957]:  > podSandboxID="d10793148ad7216a0ef1666e33973783c7ba51256a58959b454c6b69c2bbe01a"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.823436    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:44:59 ha-872795 kubelet[1957]:         container kube-scheduler start failed in pod kube-scheduler-ha-872795_kube-system(9558d13dd1c45ecd0f3d491377941404): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:59 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.823467    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-872795" podUID="9558d13dd1c45ecd0f3d491377941404"
	

-- /stdout --
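The failure threading through the kubelet log above is the same on every attempt: container create for kube-apiserver and kube-scheduler fails with "cannot open sd-bus: No such file or directory", so the apiserver on 192.168.49.2:8443 never starts and every registration, lease, and watch call ends in "connection refused". An sd-bus error at container create usually means the runtime is trying to talk to systemd and cannot find a bus socket; the checks below are a hedged diagnostic sketch (the CRI-O config path and the cgroup-manager hypothesis are assumptions, not something this log establishes):

    # Inside the node container: does a systemd/D-Bus socket exist for the runtime to use?
    docker exec ha-872795 ls -l /run/systemd/private /run/dbus/system_bus_socket
    # Is CRI-O configured for the systemd cgroup manager? (config location assumed)
    docker exec ha-872795 grep -ri cgroup_manager /etc/crio/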
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 6 (283.260296ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:45:02.404264   73997 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (1.28s)
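The exit status 6 above is minikube flagging a stale kubeconfig (the "ha-872795" entry is missing from it), not a crashed host, and the warning in the stdout block names the fix itself. A minimal recovery sketch, assuming the cluster endpoint is actually reachable:

    # Re-point kubectl at the current endpoint for this profile
    out/minikube-linux-amd64 update-context -p ha-872795
    # Re-check component state afterwards
    out/minikube-linux-amd64 status -p ha-872795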

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.5s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-872795" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-872795\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-872795\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nf
sshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-872795\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonIm
ages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-872795" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-872795\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-872795\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-872795\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\
"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --
output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 66301,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:35:02.679281779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f7265370a58453c840a8192d5da4a6feb4bbbb948c2dbabc9298cbd8189ca6f",
	            "SandboxKey": "/var/run/docker/netns/6f7265370a58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:33:39:9e:e0:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "632391284aa44c2eb0bec81b03555e3fed7084145763827002664c08b386a588",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
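The inspect output shows the container itself is healthy: running, capped at 3 GiB of memory, with every node port published on 127.0.0.1 (8443 mapped to host port 32786). The same Go template minikube uses later in this log for 22/tcp extracts any of these mappings directly:

    docker container inspect ha-872795 \
      -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
    # prints 32786, per the NetworkSettings block above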
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 6 (274.587567ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:45:02.995481   74239 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
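The stderr repeats the same kubeconfig complaint as the previous test, and since the error gives the kubeconfig path verbatim, the claim can be verified directly:

    # List contexts in the integration kubeconfig; ha-872795 should be absent
    kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21683-9327/kubeconfig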
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-753218 image ls --format yaml --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image build -t localhost/my-image:functional-753218 testdata/build --alsologtostderr          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format json --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format table --alsologtostderr                                                     │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls                                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ delete  │ -p functional-753218                                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ start   │ ha-872795 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- rollout status deployment/busybox                                                          │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node add --alsologtostderr -v 5                                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:34:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:34:57.603077   65735 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:34:57.603305   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603314   65735 out.go:374] Setting ErrFile to fd 2...
	I1002 20:34:57.603317   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603506   65735 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:34:57.604047   65735 out.go:368] Setting JSON to false
	I1002 20:34:57.604900   65735 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4647,"bootTime":1759432651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:34:57.604978   65735 start.go:140] virtualization: kvm guest
	I1002 20:34:57.606757   65735 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:34:57.608100   65735 notify.go:221] Checking for updates...
	I1002 20:34:57.608137   65735 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:34:57.609378   65735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:34:57.610568   65735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:34:57.611851   65735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:34:57.613234   65735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:34:57.614447   65735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:34:57.615922   65735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:34:57.640606   65735 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:34:57.640699   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.694592   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.684173929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.694720   65735 docker.go:319] overlay module found
	I1002 20:34:57.696786   65735 out.go:179] * Using the docker driver based on user configuration
	I1002 20:34:57.698015   65735 start.go:306] selected driver: docker
	I1002 20:34:57.698032   65735 start.go:936] validating driver "docker" against <nil>
	I1002 20:34:57.698041   65735 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:34:57.698548   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.751429   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.741121046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.751598   65735 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:34:57.751837   65735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:34:57.753668   65735 out.go:179] * Using Docker driver with root privileges
	I1002 20:34:57.755151   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:34:57.755225   65735 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 20:34:57.755237   65735 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:34:57.755322   65735 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1002 20:34:57.756619   65735 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:34:57.757986   65735 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:34:57.759335   65735 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:34:57.760371   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:57.760415   65735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:34:57.760432   65735 cache.go:59] Caching tarball of preloaded images
	I1002 20:34:57.760405   65735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:34:57.760507   65735 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:34:57.760515   65735 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:34:57.760879   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:34:57.760904   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json: {Name:mk498e6817e1668b9a52683e21d0fee45dc5b922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:34:57.779767   65735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:34:57.779785   65735 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:34:57.779798   65735 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:34:57.779817   65735 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:34:57.779902   65735 start.go:365] duration metric: took 70.896µs to acquireMachinesLock for "ha-872795"
	I1002 20:34:57.779922   65735 start.go:94] Provisioning new machine with config: &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:34:57.779985   65735 start.go:126] createHost starting for "" (driver="docker")
	I1002 20:34:57.782551   65735 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 20:34:57.782770   65735 start.go:160] libmachine.API.Create for "ha-872795" (driver="docker")
	I1002 20:34:57.782797   65735 client.go:168] LocalClient.Create starting
	I1002 20:34:57.782848   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem
	I1002 20:34:57.782884   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782899   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.782957   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem
	I1002 20:34:57.782975   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782985   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.783261   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:34:57.799786   65735 cli_runner.go:211] docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:34:57.799838   65735 network_create.go:284] running [docker network inspect ha-872795] to gather additional debugging logs...
	I1002 20:34:57.799850   65735 cli_runner.go:164] Run: docker network inspect ha-872795
	W1002 20:34:57.816003   65735 cli_runner.go:211] docker network inspect ha-872795 returned with exit code 1
	I1002 20:34:57.816028   65735 network_create.go:287] error running [docker network inspect ha-872795]: docker network inspect ha-872795: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-872795 not found
	I1002 20:34:57.816042   65735 network_create.go:289] output of [docker network inspect ha-872795]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-872795 not found
	
	** /stderr **
	I1002 20:34:57.816123   65735 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:34:57.832837   65735 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d0e700}
	I1002 20:34:57.832869   65735 network_create.go:124] attempt to create docker network ha-872795 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:34:57.832917   65735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-872795 ha-872795
	I1002 20:34:57.888359   65735 network_create.go:108] docker network ha-872795 192.168.49.0/24 created
	I1002 20:34:57.888401   65735 kic.go:121] calculated static IP "192.168.49.2" for the "ha-872795" container
	I1002 20:34:57.888473   65735 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:34:57.905091   65735 cli_runner.go:164] Run: docker volume create ha-872795 --label name.minikube.sigs.k8s.io=ha-872795 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:34:57.922367   65735 oci.go:103] Successfully created a docker volume ha-872795
	I1002 20:34:57.922439   65735 cli_runner.go:164] Run: docker run --rm --name ha-872795-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --entrypoint /usr/bin/test -v ha-872795:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:34:58.293551   65735 oci.go:107] Successfully prepared a docker volume ha-872795
	I1002 20:34:58.293606   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:58.293630   65735 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:34:58.293727   65735 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:35:02.570537   65735 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.276723115s)
	I1002 20:35:02.570574   65735 kic.go:203] duration metric: took 4.276941565s to extract preloaded images to volume ...
	W1002 20:35:02.570728   65735 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 20:35:02.570763   65735 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 20:35:02.570812   65735 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:35:02.622879   65735 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-872795 --name ha-872795 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-872795 --network ha-872795 --ip 192.168.49.2 --volume ha-872795:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:35:02.884170   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Running}}
	I1002 20:35:02.901900   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:02.919319   65735 cli_runner.go:164] Run: docker exec ha-872795 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:35:02.965612   65735 oci.go:144] the created container "ha-872795" has a running status.
	I1002 20:35:02.965668   65735 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa...
	I1002 20:35:03.839142   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 20:35:03.839192   65735 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:35:03.863522   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.880799   65735 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:35:03.880817   65735 kic_runner.go:114] Args: [docker exec --privileged ha-872795 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:35:03.926950   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.943420   65735 machine.go:93] provisionDockerMachine start ...
	I1002 20:35:03.943502   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:03.960150   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:03.960365   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:03.960377   65735 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:35:04.102560   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
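	(The 127.0.0.1 32783 endpoint in the SSH client struct above is the host port Docker published for the container's 22/tcp; see the --publish=127.0.0.1::22 flag in the docker run command earlier. It can be confirmed from the host with:
	  docker port ha-872795 22/tcp   # prints 127.0.0.1:32783 for this run
	)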
	
	I1002 20:35:04.102595   65735 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:35:04.102672   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.119770   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.119958   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.119969   65735 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:35:04.269954   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.270024   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.289297   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.289532   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.289560   65735 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:35:04.431294   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:35:04.431331   65735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:35:04.431373   65735 ubuntu.go:190] setting up certificates
	I1002 20:35:04.431385   65735 provision.go:84] configureAuth start
	I1002 20:35:04.431438   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:04.448348   65735 provision.go:143] copyHostCerts
	I1002 20:35:04.448379   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448406   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:35:04.448415   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448488   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:35:04.448574   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448595   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:35:04.448601   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448642   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:35:04.448726   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448747   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:35:04.448752   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448791   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:35:04.448862   65735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:35:04.988583   65735 provision.go:177] copyRemoteCerts
	I1002 20:35:04.988660   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:35:04.988705   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.006318   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.107594   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:35:05.107664   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:35:05.126297   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:35:05.126364   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 20:35:05.143480   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:35:05.143534   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:35:05.159876   65735 provision.go:87] duration metric: took 728.473665ms to configureAuth
	I1002 20:35:05.159898   65735 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:35:05.160046   65735 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:35:05.160126   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.177170   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:05.177376   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:05.177394   65735 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:35:05.428320   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:35:05.428345   65735 machine.go:96] duration metric: took 1.484903501s to provisionDockerMachine
	I1002 20:35:05.428356   65735 client.go:171] duration metric: took 7.645553848s to LocalClient.Create
	I1002 20:35:05.428375   65735 start.go:168] duration metric: took 7.645605965s to libmachine.API.Create "ha-872795"
	I1002 20:35:05.428384   65735 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:35:05.428397   65735 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:35:05.428457   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:35:05.428518   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.445519   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.548684   65735 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:35:05.552289   65735 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:35:05.552315   65735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:35:05.552328   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:35:05.552389   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:35:05.552506   65735 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:35:05.552523   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:35:05.552690   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:35:05.559922   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:05.579393   65735 start.go:297] duration metric: took 150.9926ms for postStartSetup
	I1002 20:35:05.579737   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.596296   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:35:05.596542   65735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:35:05.596580   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.613097   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.710446   65735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:35:05.714631   65735 start.go:129] duration metric: took 7.934634425s to createHost
	I1002 20:35:05.714669   65735 start.go:84] releasing machines lock for "ha-872795", held for 7.934754949s
	I1002 20:35:05.714730   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.731548   65735 ssh_runner.go:195] Run: cat /version.json
	I1002 20:35:05.731603   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.731636   65735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:35:05.731714   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.749775   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.750794   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.899878   65735 ssh_runner.go:195] Run: systemctl --version
	I1002 20:35:05.906012   65735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:35:05.939030   65735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:35:05.945162   65735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:35:05.945228   65735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:35:05.970822   65735 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 20:35:05.970843   65735 start.go:496] detecting cgroup driver to use...
	I1002 20:35:05.970881   65735 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:35:05.970920   65735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:35:05.986282   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:35:05.998266   65735 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:35:05.998312   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:35:06.014196   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:35:06.031061   65735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:35:06.110898   65735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:35:06.195426   65735 docker.go:234] disabling docker service ...
	I1002 20:35:06.195481   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:35:06.213240   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:35:06.225772   65735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:35:06.305091   65735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:35:06.379591   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:35:06.391746   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:35:06.405291   65735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:35:06.405351   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.415561   65735 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:35:06.415640   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.425296   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.433995   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.442516   65735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:35:06.450342   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.458541   65735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.471597   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
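	(Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the following keys. A sketch only: the TOML section headers are assumed, and just the touched keys are shown:
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  [crio.runtime]
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	)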
	I1002 20:35:06.479764   65735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:35:06.486745   65735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:35:06.493925   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:06.570695   65735 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:35:06.674347   65735 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:35:06.674398   65735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:35:06.678210   65735 start.go:564] Will wait 60s for crictl version
	I1002 20:35:06.678265   65735 ssh_runner.go:195] Run: which crictl
	I1002 20:35:06.681670   65735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:35:06.705846   65735 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:35:06.705915   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.732749   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.761896   65735 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:35:06.763235   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:35:06.779789   65735 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:35:06.783762   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
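	(The one-liner above is minikube's idempotent hosts-file update; restated with comments, same logic as the command in this log:
	  {
	    grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale entry
	    echo "192.168.49.1	host.minikube.internal"        # append the fresh mapping
	  } > /tmp/h.$$                  # stage in a temp file: redirecting straight into /etc/hosts would truncate it before grep reads it
	  sudo cp /tmp/h.$$ /etc/hosts   # only the final copy needs root
	)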
	I1002 20:35:06.793731   65735 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:35:06.793825   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:35:06.793893   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.823605   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.823628   65735 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:35:06.823701   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.848195   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.848217   65735 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:35:06.848224   65735 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:35:06.848297   65735 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:35:06.848363   65735 ssh_runner.go:195] Run: crio config
	I1002 20:35:06.893288   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:35:06.893308   65735 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:35:06.893324   65735 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:35:06.893344   65735 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:35:06.893447   65735 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:35:06.893469   65735 kube-vip.go:115] generating kube-vip config ...
	I1002 20:35:06.893510   65735 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 20:35:06.905349   65735 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
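	(kube-vip's control-plane load-balancing is skipped here because no ip_vs modules are loaded on the host. If load-balancing is wanted, the usual IPVS module set can be loaded beforehand. A sketch: the module list is kube-vip's standard IPVS prerequisite set, not taken from this log:
	  sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
	  lsmod | grep ip_vs   # re-run the check minikube uses above
	)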
	I1002 20:35:06.905448   65735 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1002 20:35:06.905495   65735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:35:06.913067   65735 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:35:06.913130   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 20:35:06.920364   65735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:35:06.932249   65735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:35:06.947038   65735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:35:06.959316   65735 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 20:35:06.973553   65735 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 20:35:06.977144   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.987569   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:07.065543   65735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:35:07.088051   65735 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:35:07.088073   65735 certs.go:195] generating shared ca certs ...
	I1002 20:35:07.088093   65735 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.088253   65735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:35:07.088318   65735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:35:07.088333   65735 certs.go:257] generating profile certs ...
	I1002 20:35:07.088394   65735 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:35:07.088419   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt with IP's: []
	I1002 20:35:07.271177   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt ...
	I1002 20:35:07.271216   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt: {Name:mkc9958351da3092e35cf2a0dff49f20b7dda9b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271433   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key ...
	I1002 20:35:07.271450   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key: {Name:mk9049d9b976553e3e7ceaf1eae8281114f2b93c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271566   65735 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2
	I1002 20:35:07.271586   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 20:35:07.440257   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 ...
	I1002 20:35:07.440291   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2: {Name:mk26eb0eaef560c0eb32f96e5b3e81634dfe2a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440486   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 ...
	I1002 20:35:07.440505   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2: {Name:mkc88fc8c0ccf5f6abaf6c3777b5135914058a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440621   65735 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt
	I1002 20:35:07.440775   65735 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key
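	(The SANs requested above — 10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 and the HA VIP 192.168.49.254 — can be read back out of the finished cert with stock openssl; a verification sketch, path from this log:
	  openssl x509 -noout -text -in /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt | grep -A1 'Subject Alternative Name'
	)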
	I1002 20:35:07.440868   65735 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:35:07.440913   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt with IP's: []
	I1002 20:35:07.579277   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt ...
	I1002 20:35:07.579309   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt: {Name:mk15b2fa98abbfdb48c91ec680acf9fb3d36790e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579497   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key ...
	I1002 20:35:07.579512   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key: {Name:mk9a0c724bfce05599dc583fc00cefa3d87ea4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579634   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:35:07.579676   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:35:07.579703   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:35:07.579724   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:35:07.579741   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:35:07.579761   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:35:07.579777   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:35:07.579793   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:35:07.579863   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:35:07.579919   65735 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:35:07.579934   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:35:07.579978   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:35:07.580017   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:35:07.580054   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:35:07.580109   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:07.580147   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.580168   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.580188   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.580777   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:35:07.598837   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:35:07.615911   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:35:07.632508   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:35:07.649448   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:35:07.666220   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:35:07.682564   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:35:07.698671   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:35:07.715367   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:35:07.733828   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:35:07.750440   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:35:07.767319   65735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:35:07.779274   65735 ssh_runner.go:195] Run: openssl version
	I1002 20:35:07.785412   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:35:07.793537   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797120   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797185   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.830751   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:35:07.839226   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:35:07.847497   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851214   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851284   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.884464   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:35:07.892882   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:35:07.901283   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904919   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904989   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.938825   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
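	(The three test-and-link blocks above all follow OpenSSL's hashed-symlink layout for CA directories: hash the cert, then point /etc/ssl/certs/<hash>.0 at it. The generic pattern, sketched with one of the certs from this log:
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 for this cert
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	)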
	I1002 20:35:07.947236   65735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:35:07.950736   65735 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:35:07.950798   65735 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:35:07.950893   65735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:35:07.950946   65735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:35:07.977912   65735 cri.go:89] found id: ""
	I1002 20:35:07.977979   65735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:35:07.986292   65735 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:35:07.994921   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:35:07.994975   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:35:08.002478   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:35:08.002496   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:35:08.002543   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:35:08.009971   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:35:08.010024   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:35:08.017237   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:35:08.024502   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:35:08.024556   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:35:08.031907   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.039330   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:35:08.039379   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.046463   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:35:08.053807   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:35:08.053903   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:35:08.060624   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:35:08.119106   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:35:08.173560   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:39:12.973904   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:39:12.973990   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:39:12.976774   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:12.976818   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:12.976887   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:12.976934   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:12.976995   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:12.977058   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:12.977098   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:12.977140   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:12.977186   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:12.977225   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:12.977267   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:12.977305   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:12.977341   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:12.977406   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:12.977543   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:12.977704   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:12.977780   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:12.979758   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:12.979842   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:12.979923   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:12.980019   65735 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:39:12.980106   65735 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:39:12.980194   65735 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:39:12.980269   65735 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:39:12.980321   65735 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:39:12.980413   65735 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980466   65735 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:39:12.980568   65735 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980632   65735 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:39:12.980716   65735 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:39:12.980755   65735 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:39:12.980811   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:12.980857   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:12.980906   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:12.980951   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:12.981048   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:12.981126   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:12.981273   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:12.981350   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:12.983755   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:12.983842   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:12.983938   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:12.984036   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:12.984176   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:12.984257   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:12.984363   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:12.984488   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:12.984545   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:12.984753   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:12.984854   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:12.984903   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.922515ms
	I1002 20:39:12.984979   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:12.985054   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:12.985161   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:12.985238   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:39:12.985311   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	I1002 20:39:12.985378   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	I1002 20:39:12.985440   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	I1002 20:39:12.985446   65735 kubeadm.go:318] 
	I1002 20:39:12.985522   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:39:12.985593   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:39:12.985693   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:39:12.985824   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:39:12.985912   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:39:12.986021   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:39:12.986035   65735 kubeadm.go:318] 
	W1002 20:39:12.986169   65735 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.922515ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
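Note: the wait-control-plane output above names the exact health endpoints kubeadm polls. A minimal way to reproduce those probes by hand, assuming the docker driver (so the node is a container named ha-872795, as in this run) and that curl is available inside the node image:

    # Probe the same endpoints kubeadm's control-plane-check uses (-k skips TLS verification).
    docker exec ha-872795 curl -sk --max-time 5 http://127.0.0.1:10248/healthz   # kubelet
    docker exec ha-872795 curl -sk --max-time 5 https://192.168.49.2:8443/livez  # kube-apiserver
    docker exec ha-872795 curl -sk --max-time 5 https://127.0.0.1:10257/healthz  # kube-controller-manager
    docker exec ha-872795 curl -sk --max-time 5 https://127.0.0.1:10259/livez    # kube-scheduler

In this run only the kubelet endpoint ever answers; the other three refuse connections because the control-plane containers never start (see the CRI-O log further down).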
	
	I1002 20:39:12.986234   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:39:15.725293   65735 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.739035287s)
	I1002 20:39:15.725355   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:39:15.737845   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:39:15.737903   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:39:15.745492   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:39:15.745511   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:39:15.745550   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:39:15.752939   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:39:15.752992   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:39:15.759738   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:39:15.767128   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:39:15.767188   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:39:15.774064   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.781262   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:39:15.781310   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.788069   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:39:15.794962   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:39:15.795005   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
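Note: the grep-then-rm sequence above is minikube's stale kubeconfig cleanup. It amounts to roughly the following loop (a sketch of the behavior visible in the log, not minikube's actual implementation):

    # Drop any kubeconfig that does not point at the expected control-plane endpoint.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done

Here every grep exits with status 2 because kubeadm reset already removed the files, so each rm is a no-op.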
	I1002 20:39:15.801555   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:39:15.838178   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:15.838265   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:15.857468   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:15.857554   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:15.857595   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:15.857662   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:15.857732   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:15.857790   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:15.857850   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:15.857910   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:15.857968   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:15.858025   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:15.858074   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:15.913423   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:15.913519   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:15.913695   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:15.919381   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:15.923412   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:15.923499   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:15.923576   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:15.923699   65735 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:39:15.923782   65735 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:39:15.923894   65735 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:39:15.923992   65735 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:39:15.924083   65735 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:39:15.924181   65735 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:39:15.924290   65735 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:39:15.924413   65735 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:39:15.924467   65735 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:39:15.924516   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:15.972082   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:16.453776   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:16.604369   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:16.663878   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:16.897315   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:16.897840   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:16.900020   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:16.904460   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:16.904580   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:16.904708   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:16.904777   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:16.917381   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:16.917507   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:16.923733   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:16.923961   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:16.924014   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:17.026795   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:17.026994   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:18.027761   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001153954s
	I1002 20:39:18.030723   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:18.030874   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:18.030983   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:18.031132   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:43:18.032268   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	I1002 20:43:18.032470   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	I1002 20:43:18.032708   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	I1002 20:43:18.032742   65735 kubeadm.go:318] 
	I1002 20:43:18.032978   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:43:18.033184   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:43:18.033380   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:43:18.033599   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:43:18.033817   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:43:18.034037   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:43:18.034052   65735 kubeadm.go:318] 
	I1002 20:43:18.036299   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:43:18.036439   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:43:18.037026   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 20:43:18.037095   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:43:18.037166   65735 kubeadm.go:402] duration metric: took 8m10.08637402s to StartCluster
	I1002 20:43:18.037208   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:43:18.037266   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:43:18.063637   65735 cri.go:89] found id: ""
	I1002 20:43:18.063696   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.063705   65735 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:43:18.063713   65735 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:43:18.063764   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:43:18.087963   65735 cri.go:89] found id: ""
	I1002 20:43:18.087984   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.087991   65735 logs.go:284] No container was found matching "etcd"
	I1002 20:43:18.087996   65735 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:43:18.088039   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:43:18.111877   65735 cri.go:89] found id: ""
	I1002 20:43:18.111898   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.111905   65735 logs.go:284] No container was found matching "coredns"
	I1002 20:43:18.111911   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:43:18.111958   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:43:18.134814   65735 cri.go:89] found id: ""
	I1002 20:43:18.134834   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.134841   65735 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:43:18.134847   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:43:18.134887   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:43:18.158528   65735 cri.go:89] found id: ""
	I1002 20:43:18.158551   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.158558   65735 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:43:18.158564   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:43:18.158607   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:43:18.182849   65735 cri.go:89] found id: ""
	I1002 20:43:18.182871   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.182878   65735 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:43:18.182883   65735 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:43:18.182926   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:43:18.207447   65735 cri.go:89] found id: ""
	I1002 20:43:18.207470   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.207481   65735 logs.go:284] No container was found matching "kindnet"
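Note: after the init failure, minikube enumerates the expected control-plane and CNI containers one name at a time; the sequence above is equivalent to a loop like this (sketch):

    # List all CRI containers (any state) matching each expected component name.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$name"
    done

Every lookup returns an empty list: no component container exists, even in an exited state.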
	I1002 20:43:18.207493   65735 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:43:18.207507   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:43:18.263385   65735 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:43:18.263409   65735 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:43:18.263421   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:43:18.324936   65735 logs.go:123] Gathering logs for container status ...
	I1002 20:43:18.324969   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:43:18.351509   65735 logs.go:123] Gathering logs for kubelet ...
	I1002 20:43:18.351536   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:43:18.417761   65735 logs.go:123] Gathering logs for dmesg ...
	I1002 20:43:18.417794   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1002 20:43:18.428567   65735 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:43:18.428641   65735 out.go:285] * 
	W1002 20:43:18.428716   65735 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.428735   65735 out.go:285] * 
	W1002 20:43:18.430415   65735 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:43:18.433894   65735 out.go:203] 
	W1002 20:43:18.435307   65735 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.435342   65735 out.go:285] * 
	I1002 20:43:18.437263   65735 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.826356719Z" level=info msg="createCtr: deleting container 138a6dabf63007d1b4a2c0acc749aa56f5ab1e459e19ff534e44fab9b2d43fa0 from storage" id=9f60a799-18bb-48b0-8c3f-888a44f48b88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.827919811Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=41ed485b-12d3-411c-967a-896b2c0fdab3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:50 ha-872795 crio[775]: time="2025-10-02T20:44:50.830031075Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-872795_kube-system_d62260a65d00c99f27090ce6484101a9_0" id=9f60a799-18bb-48b0-8c3f-888a44f48b88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.801883479Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=57eb11f2-ebce-4ac7-9758-5513f4d13809 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.802810009Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=5e092a64-a9db-46ea-a6f8-75a15c4871a4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.803693911Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-872795/kube-apiserver" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.803901947Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.807281855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.8076945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.825644838Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.827042779Z" level=info msg="createCtr: deleting container ID 3e8152e866fa071f321cf012566678af0c3b90587478627e33a052a71fe906e3 from idIndex" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.827075909Z" level=info msg="createCtr: removing container 3e8152e866fa071f321cf012566678af0c3b90587478627e33a052a71fe906e3" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.827105892Z" level=info msg="createCtr: deleting container 3e8152e866fa071f321cf012566678af0c3b90587478627e33a052a71fe906e3 from storage" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.829276684Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-872795_kube-system_107e053e4ed538c93835a81754178211_0" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.801275985Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=4fca0a49-1065-4871-9912-9e712931de4a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.802205974Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=3341d1d0-63dd-4440-96cb-dbe32f52fd15 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.80316376Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-872795/kube-scheduler" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.803420994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.806973431Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.807354248Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.819369479Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.820888398Z" level=info msg="createCtr: deleting container ID 0bf698f0873c641479ec231acbff24b3fb290183b302d4429c9a7853f9485533 from idIndex" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.820923326Z" level=info msg="createCtr: removing container 0bf698f0873c641479ec231acbff24b3fb290183b302d4429c9a7853f9485533" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.820960943Z" level=info msg="createCtr: deleting container 0bf698f0873c641479ec231acbff24b3fb290183b302d4429c9a7853f9485533 from storage" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.823048862Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-872795_kube-system_9558d13dd1c45ecd0f3d491377941404_0" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
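Note: the repeated "cannot open sd-bus: No such file or directory" errors above are the actual failure: the OCI runtime is asked to place containers in systemd-managed cgroups, but no systemd D-Bus socket is reachable inside the node container. One way to confirm the configuration, assuming the docker driver, the ha-872795 node container, and the usual socket locations:

    # Which cgroup manager is CRI-O configured to use?
    docker exec ha-872795 grep -r cgroup_manager /etc/crio/
    # Are systemd's bus sockets present where the runtime would look for them?
    docker exec ha-872795 ls -l /run/dbus/system_bus_socket /run/systemd/private

If cgroup_manager is "systemd" while those sockets are missing, the usual remedies are to repair systemd/D-Bus inside the node image or to switch CRI-O to cgroup_manager = "cgroupfs" (which CRI-O requires to be paired with conmon_cgroup = "pod").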
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:45:03.545165    3706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:03.545724    3706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:03.547213    3706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:03.547636    3706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:03.549177    3706 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:45:03 up  1:27,  0 user,  load average: 0.83, 0.47, 0.25
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.437895    1957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: I1002 20:44:52.603095    1957 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.603495    1957 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.801483    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.829535    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:44:52 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:52 ha-872795 kubelet[1957]:  > podSandboxID="ef6e539a74c14f8dbdf14c5253393747982dcd66fa486356dbfe446d81891de8"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.829641    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:44:52 ha-872795 kubelet[1957]:         container kube-apiserver start failed in pod kube-apiserver-ha-872795_kube-system(107e053e4ed538c93835a81754178211): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:52 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.829696    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-872795" podUID="107e053e4ed538c93835a81754178211"
	Oct 02 20:44:56 ha-872795 kubelet[1957]: E1002 20:44:56.709315    1957 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-872795.186ac7230cb11fa8  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-872795,UID:ha-872795,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-872795 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-872795,},FirstTimestamp:2025-10-02 20:39:17.792317352 +0000 UTC m=+0.765279708,LastTimestamp:2025-10-02 20:39:17.792317352 +0000 UTC m=+0.765279708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-872795,}"
	Oct 02 20:44:57 ha-872795 kubelet[1957]: E1002 20:44:57.817829    1957 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-872795\" not found"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.401703    1957 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.439050    1957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: I1002 20:44:59.605385    1957 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.605749    1957 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.800841    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.823342    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:44:59 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:59 ha-872795 kubelet[1957]:  > podSandboxID="d10793148ad7216a0ef1666e33973783c7ba51256a58959b454c6b69c2bbe01a"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.823436    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:44:59 ha-872795 kubelet[1957]:         container kube-scheduler start failed in pod kube-scheduler-ha-872795_kube-system(9558d13dd1c45ecd0f3d491377941404): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:59 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.823467    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-872795" podUID="9558d13dd1c45ecd0f3d491377941404"
	

-- /stdout --
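The kubelet log above shows the actual root cause for this run: the kube-apiserver and kube-scheduler containers repeatedly fail with CreateContainerError ("container create failed: cannot open sd-bus: No such file or directory"), so the apiserver on 192.168.49.2:8443 never comes up and every lease/register/event call gets connection refused. "cannot open sd-bus" is what an OCI runtime reports when it is configured for the systemd cgroup manager but cannot reach the system D-Bus socket inside the node. A minimal triage sketch, assuming SSH access via `minikube ssh -p ha-872795` and stock paths (neither command appears in this log):

	# Which cgroup manager is CRI-O configured for? "systemd" needs a reachable bus.
	sudo crio config 2>/dev/null | grep -i cgroup_manager
	# The system bus socket sd-bus opens; its absence matches the error above.
	ls -l /run/dbus/system_bus_socket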
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 6 (285.006257ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:45:03.903380   74562 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.50s)
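Exit status 6 here corresponds to the kubeconfig problem, not the host: per the status.go:458 error above, the "ha-872795" entry is missing from the kubeconfig, and the stdout warning already names the remedy. A sketch of the repair, assuming the same profile and kubeconfig path; note this only fixes the context, not the stopped apiserver:

	grep -c ha-872795 /home/jenkins/minikube-integration/21683-9327/kubeconfig  # prints 0 for this run
	out/minikube-linux-amd64 update-context -p ha-872795                        # rewrite the profile's kubeconfig entry
	out/minikube-linux-amd64 status -p ha-872795 --format='{{.Kubeconfig}}'     # expect Configured instead of Misconfigured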

TestMultiControlPlane/serial/CopyFile (1.5s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 status --output json --alsologtostderr -v 5: exit status 6 (281.428941ms)

-- stdout --
	{"Name":"ha-872795","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

-- /stdout --
** stderr ** 
	I1002 20:45:03.956775   74679 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:03.957012   74679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:03.957021   74679 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:03.957024   74679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:03.957209   74679 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:45:03.957357   74679 out.go:368] Setting JSON to true
	I1002 20:45:03.957379   74679 mustload.go:65] Loading cluster: ha-872795
	I1002 20:45:03.957421   74679 notify.go:221] Checking for updates...
	I1002 20:45:03.957710   74679 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:03.957725   74679 status.go:174] checking status of ha-872795 ...
	I1002 20:45:03.958122   74679 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:03.976202   74679 status.go:371] ha-872795 host status = "Running" (err=<nil>)
	I1002 20:45:03.976238   74679 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:03.976511   74679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:45:03.993218   74679 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:03.993450   74679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:45:03.993486   74679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:45:04.011221   74679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:45:04.108613   74679 ssh_runner.go:195] Run: systemctl --version
	I1002 20:45:04.114898   74679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:45:04.126491   74679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:04.184504   74679 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:45:04.175260195 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 20:45:04.185000   74679 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:45:04.185029   74679 api_server.go:166] Checking apiserver status ...
	I1002 20:45:04.185069   74679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 20:45:04.195195   74679 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:45:04.195218   74679 status.go:463] ha-872795 apiserver status = Running (err=<nil>)
	I1002 20:45:04.195229   74679 status.go:176] ha-872795 status: &{Name:ha-872795 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:330: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-872795 status --output json --alsologtostderr -v 5" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 66301,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:35:02.679281779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f7265370a58453c840a8192d5da4a6feb4bbbb948c2dbabc9298cbd8189ca6f",
	            "SandboxKey": "/var/run/docker/netns/6f7265370a58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:33:39:9e:e0:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "632391284aa44c2eb0bec81b03555e3fed7084145763827002664c08b386a588",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
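Most of the inspect dump matters only for the port map: the control-plane container is healthy at the Docker level (State.Status "running", static IP 192.168.49.2), and the published HostPorts show where each guest port landed. The host port backing the apiserver can be read back with the same Go-template pattern status.go uses above for 22/tcp; a one-liner using the values from this run:

	# Host port published for the apiserver (8443/tcp) of the ha-872795 container:
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-872795  # -> 32786 on this run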
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 6 (280.194051ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:45:04.484596   74800 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-753218 image ls --format yaml --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image build -t localhost/my-image:functional-753218 testdata/build --alsologtostderr          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format json --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format table --alsologtostderr                                                     │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls                                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ delete  │ -p functional-753218                                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ start   │ ha-872795 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- rollout status deployment/busybox                                                          │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node add --alsologtostderr -v 5                                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:34:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:34:57.603077   65735 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:34:57.603305   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603314   65735 out.go:374] Setting ErrFile to fd 2...
	I1002 20:34:57.603317   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603506   65735 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:34:57.604047   65735 out.go:368] Setting JSON to false
	I1002 20:34:57.604900   65735 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4647,"bootTime":1759432651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:34:57.604978   65735 start.go:140] virtualization: kvm guest
	I1002 20:34:57.606757   65735 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:34:57.608100   65735 notify.go:221] Checking for updates...
	I1002 20:34:57.608137   65735 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:34:57.609378   65735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:34:57.610568   65735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:34:57.611851   65735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:34:57.613234   65735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:34:57.614447   65735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:34:57.615922   65735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:34:57.640606   65735 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:34:57.640699   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.694592   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.684173929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.694720   65735 docker.go:319] overlay module found
	I1002 20:34:57.696786   65735 out.go:179] * Using the docker driver based on user configuration
	I1002 20:34:57.698015   65735 start.go:306] selected driver: docker
	I1002 20:34:57.698032   65735 start.go:936] validating driver "docker" against <nil>
	I1002 20:34:57.698041   65735 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:34:57.698548   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.751429   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.741121046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.751598   65735 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:34:57.751837   65735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:34:57.753668   65735 out.go:179] * Using Docker driver with root privileges
	I1002 20:34:57.755151   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:34:57.755225   65735 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 20:34:57.755237   65735 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:34:57.755322   65735 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1002 20:34:57.756619   65735 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:34:57.757986   65735 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:34:57.759335   65735 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:34:57.760371   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:57.760415   65735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:34:57.760432   65735 cache.go:59] Caching tarball of preloaded images
	I1002 20:34:57.760405   65735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:34:57.760507   65735 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:34:57.760515   65735 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:34:57.760879   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:34:57.760904   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json: {Name:mk498e6817e1668b9a52683e21d0fee45dc5b922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:34:57.779767   65735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:34:57.779785   65735 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:34:57.779798   65735 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:34:57.779817   65735 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:34:57.779902   65735 start.go:365] duration metric: took 70.896µs to acquireMachinesLock for "ha-872795"
	I1002 20:34:57.779922   65735 start.go:94] Provisioning new machine with config: &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:34:57.779985   65735 start.go:126] createHost starting for "" (driver="docker")
	I1002 20:34:57.782551   65735 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 20:34:57.782770   65735 start.go:160] libmachine.API.Create for "ha-872795" (driver="docker")
	I1002 20:34:57.782797   65735 client.go:168] LocalClient.Create starting
	I1002 20:34:57.782848   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem
	I1002 20:34:57.782884   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782899   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.782957   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem
	I1002 20:34:57.782975   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782985   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.783261   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:34:57.799786   65735 cli_runner.go:211] docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:34:57.799838   65735 network_create.go:284] running [docker network inspect ha-872795] to gather additional debugging logs...
	I1002 20:34:57.799850   65735 cli_runner.go:164] Run: docker network inspect ha-872795
	W1002 20:34:57.816003   65735 cli_runner.go:211] docker network inspect ha-872795 returned with exit code 1
	I1002 20:34:57.816028   65735 network_create.go:287] error running [docker network inspect ha-872795]: docker network inspect ha-872795: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-872795 not found
	I1002 20:34:57.816042   65735 network_create.go:289] output of [docker network inspect ha-872795]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-872795 not found
	
	** /stderr **
	I1002 20:34:57.816123   65735 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:34:57.832837   65735 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d0e700}
	I1002 20:34:57.832869   65735 network_create.go:124] attempt to create docker network ha-872795 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:34:57.832917   65735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-872795 ha-872795
	I1002 20:34:57.888359   65735 network_create.go:108] docker network ha-872795 192.168.49.0/24 created
	I1002 20:34:57.888401   65735 kic.go:121] calculated static IP "192.168.49.2" for the "ha-872795" container
	I1002 20:34:57.888473   65735 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:34:57.905091   65735 cli_runner.go:164] Run: docker volume create ha-872795 --label name.minikube.sigs.k8s.io=ha-872795 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:34:57.922367   65735 oci.go:103] Successfully created a docker volume ha-872795
	I1002 20:34:57.922439   65735 cli_runner.go:164] Run: docker run --rm --name ha-872795-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --entrypoint /usr/bin/test -v ha-872795:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:34:58.293551   65735 oci.go:107] Successfully prepared a docker volume ha-872795
	I1002 20:34:58.293606   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:58.293630   65735 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:34:58.293727   65735 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:35:02.570537   65735 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.276723115s)
	I1002 20:35:02.570574   65735 kic.go:203] duration metric: took 4.276941565s to extract preloaded images to volume ...
	W1002 20:35:02.570728   65735 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 20:35:02.570763   65735 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 20:35:02.570812   65735 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:35:02.622879   65735 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-872795 --name ha-872795 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-872795 --network ha-872795 --ip 192.168.49.2 --volume ha-872795:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:35:02.884170   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Running}}
	I1002 20:35:02.901900   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:02.919319   65735 cli_runner.go:164] Run: docker exec ha-872795 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:35:02.965612   65735 oci.go:144] the created container "ha-872795" has a running status.
	I1002 20:35:02.965668   65735 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa...
	I1002 20:35:03.839142   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 20:35:03.839192   65735 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:35:03.863522   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.880799   65735 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:35:03.880817   65735 kic_runner.go:114] Args: [docker exec --privileged ha-872795 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:35:03.926950   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.943420   65735 machine.go:93] provisionDockerMachine start ...
	I1002 20:35:03.943502   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:03.960150   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:03.960365   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:03.960377   65735 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:35:04.102560   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.102595   65735 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:35:04.102672   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.119770   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.119958   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.119969   65735 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:35:04.269954   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.270024   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.289297   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.289532   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.289560   65735 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:35:04.431294   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:35:04.431331   65735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:35:04.431373   65735 ubuntu.go:190] setting up certificates
	I1002 20:35:04.431385   65735 provision.go:84] configureAuth start
	I1002 20:35:04.431438   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:04.448348   65735 provision.go:143] copyHostCerts
	I1002 20:35:04.448379   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448406   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:35:04.448415   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448488   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:35:04.448574   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448595   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:35:04.448601   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448642   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:35:04.448726   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448747   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:35:04.448752   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448791   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:35:04.448862   65735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:35:04.988583   65735 provision.go:177] copyRemoteCerts
	I1002 20:35:04.988660   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:35:04.988705   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.006318   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.107594   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:35:05.107664   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:35:05.126297   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:35:05.126364   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 20:35:05.143480   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:35:05.143534   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:35:05.159876   65735 provision.go:87] duration metric: took 728.473665ms to configureAuth
	I1002 20:35:05.159898   65735 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:35:05.160046   65735 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:35:05.160126   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.177170   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:05.177376   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:05.177394   65735 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:35:05.428320   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:35:05.428345   65735 machine.go:96] duration metric: took 1.484903501s to provisionDockerMachine
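	The drop-in written above is the whole container-runtime customization for this step: a one-line /etc/sysconfig/crio.minikube marking the service CIDR as an insecure registry, followed by a CRI-O restart. A minimal Go sketch of the same idea (run as root on the node; an illustration of the pattern, not minikube's actual implementation):

```go
// Sketch: write the CRI-O sysconfig drop-in and restart the service.
// Content and paths are taken from the log above; requires root locally,
// whereas minikube performs this over SSH with sudo.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
		log.Fatal(err)
	}
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		log.Fatalf("restart crio: %v\n%s", err, out)
	}
}
```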
	I1002 20:35:05.428356   65735 client.go:171] duration metric: took 7.645553848s to LocalClient.Create
	I1002 20:35:05.428375   65735 start.go:168] duration metric: took 7.645605965s to libmachine.API.Create "ha-872795"
	I1002 20:35:05.428384   65735 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:35:05.428397   65735 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:35:05.428457   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:35:05.428518   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.445519   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.548684   65735 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:35:05.552289   65735 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:35:05.552315   65735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:35:05.552328   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:35:05.552389   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:35:05.552506   65735 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:35:05.552523   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:35:05.552690   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:35:05.559922   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:05.579393   65735 start.go:297] duration metric: took 150.9926ms for postStartSetup
	I1002 20:35:05.579737   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.596296   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:35:05.596542   65735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:35:05.596580   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.613097   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.710446   65735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:35:05.714631   65735 start.go:129] duration metric: took 7.934634425s to createHost
	I1002 20:35:05.714669   65735 start.go:84] releasing machines lock for "ha-872795", held for 7.934754949s
	I1002 20:35:05.714730   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.731548   65735 ssh_runner.go:195] Run: cat /version.json
	I1002 20:35:05.731603   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.731636   65735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:35:05.731714   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.749775   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.750794   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.899878   65735 ssh_runner.go:195] Run: systemctl --version
	I1002 20:35:05.906012   65735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:35:05.939030   65735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:35:05.945162   65735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:35:05.945228   65735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:35:05.970822   65735 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
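	The disable step above renames any bridge/podman CNI config with a .mk_disabled suffix so that the kindnet CNI chosen later owns the pod network. A rough local equivalent of the logged find/mv pipeline (glob patterns taken from the log; not the code minikube runs):

```go
// Sketch: park conflicting bridge CNI configs out of the way by renaming
// them, mirroring the `find ... -exec mv {} {}.mk_disabled` invocation above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pattern)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err == nil {
				fmt.Println("disabled", m)
			}
		}
	}
}
```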
	I1002 20:35:05.970843   65735 start.go:496] detecting cgroup driver to use...
	I1002 20:35:05.970881   65735 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:35:05.970920   65735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:35:05.986282   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:35:05.998266   65735 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:35:05.998312   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:35:06.014196   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:35:06.031061   65735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:35:06.110898   65735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:35:06.195426   65735 docker.go:234] disabling docker service ...
	I1002 20:35:06.195481   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:35:06.213240   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:35:06.225772   65735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:35:06.305091   65735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:35:06.379591   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:35:06.391746   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:35:06.405291   65735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:35:06.405351   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.415561   65735 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:35:06.415640   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.425296   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.433995   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.442516   65735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:35:06.450342   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.458541   65735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.471597   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.479764   65735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:35:06.486745   65735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:35:06.493925   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:06.570695   65735 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:35:06.674347   65735 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:35:06.674398   65735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:35:06.678210   65735 start.go:564] Will wait 60s for crictl version
	I1002 20:35:06.678265   65735 ssh_runner.go:195] Run: which crictl
	I1002 20:35:06.681670   65735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:35:06.705846   65735 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
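	Between `systemctl restart crio` and the version output above, minikube polls for the runtime socket and then for a working crictl, each with a 60s budget (start.go:543 and start.go:564). A hedged sketch of such a readiness wait, assuming crictl is on PATH and sudo is passwordless as in this CI environment:

```go
// Sketch: wait for the CRI socket to appear, then confirm the runtime
// answers a version query. Timeout and socket path mirror the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForCRISocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			if out, err := exec.Command("sudo", "crictl", "version").CombinedOutput(); err == nil {
				fmt.Printf("runtime ready:\n%s", out)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForCRISocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```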
	I1002 20:35:06.705915   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.732749   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.761896   65735 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:35:06.763235   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:35:06.779789   65735 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:35:06.783762   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
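	The bash one-liner above is an idempotent /etc/hosts update: drop any stale host.minikube.internal line, append the fresh gateway mapping, and copy the result back into place. The same pattern in Go, as a sketch (the `upsertHostsEntry` helper is hypothetical, and write access to the file is assumed):

```go
// Sketch: remove any line ending in "\t<name>" and append "ip\tname",
// matching the grep -v / echo / cp pipeline in the log.
package main

import (
	"os"
	"strings"
)

func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = upsertHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal")
}
```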
	I1002 20:35:06.793731   65735 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:35:06.793825   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:35:06.793893   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.823605   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.823628   65735 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:35:06.823701   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.848195   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.848217   65735 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:35:06.848224   65735 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:35:06.848297   65735 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:35:06.848363   65735 ssh_runner.go:195] Run: crio config
	I1002 20:35:06.893288   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:35:06.893308   65735 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:35:06.893324   65735 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:35:06.893344   65735 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:35:06.893447   65735 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
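	The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One way to sanity-check such a file before running init is to decode each document and print its kind; a sketch assuming the gopkg.in/yaml.v3 module is available:

```go
// Sketch: enumerate the apiVersion/kind of each document in the
// multi-document kubeadm config written to /var/tmp/minikube/kubeadm.yaml.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
```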
	
	I1002 20:35:06.893469   65735 kube-vip.go:115] generating kube-vip config ...
	I1002 20:35:06.893510   65735 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 20:35:06.905349   65735 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
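	The `lsmod | grep ip_vs` probe above exiting with status 1 means the ip_vs kernel module is not loaded, so kube-vip gives up IPVS-based control-plane load-balancing and falls back to ARP failover for the VIP. An equivalent check can read /proc/modules directly, as in this sketch:

```go
// Sketch: detect a loaded kernel module by scanning /proc/modules,
// the same data lsmod presents.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func moduleLoaded(name string) bool {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), name+" ") {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println("ip_vs loaded:", moduleLoaded("ip_vs"))
}
```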
	I1002 20:35:06.905448   65735 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
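	The manifest above runs kube-vip as a host-network static pod that holds the HA VIP 192.168.49.254 through leader election, and its prometheus_server env var exposes metrics on :2112. Assuming that endpoint is reachable on the node, a quick liveness spot-check could look like this sketch:

```go
// Sketch: probe kube-vip's metrics endpoint (port taken from the
// prometheus_server value in the manifest above).
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 3 * time.Second}
	resp, err := client.Get("http://127.0.0.1:2112/metrics")
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-vip not reachable:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d, %d bytes of metrics\n", resp.StatusCode, len(body))
}
```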
	I1002 20:35:06.905495   65735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:35:06.913067   65735 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:35:06.913130   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 20:35:06.920364   65735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:35:06.932249   65735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:35:06.947038   65735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:35:06.959316   65735 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 20:35:06.973553   65735 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 20:35:06.977144   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.987569   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:07.065543   65735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:35:07.088051   65735 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:35:07.088073   65735 certs.go:195] generating shared ca certs ...
	I1002 20:35:07.088093   65735 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.088253   65735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:35:07.088318   65735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:35:07.088333   65735 certs.go:257] generating profile certs ...
	I1002 20:35:07.088394   65735 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:35:07.088419   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt with IP's: []
	I1002 20:35:07.271177   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt ...
	I1002 20:35:07.271216   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt: {Name:mkc9958351da3092e35cf2a0dff49f20b7dda9b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271433   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key ...
	I1002 20:35:07.271450   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key: {Name:mk9049d9b976553e3e7ceaf1eae8281114f2b93c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271566   65735 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2
	I1002 20:35:07.271586   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 20:35:07.440257   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 ...
	I1002 20:35:07.440291   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2: {Name:mk26eb0eaef560c0eb32f96e5b3e81634dfe2a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440486   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 ...
	I1002 20:35:07.440505   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2: {Name:mkc88fc8c0ccf5f6abaf6c3777b5135914058a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440621   65735 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt
	I1002 20:35:07.440775   65735 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key
	I1002 20:35:07.440868   65735 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:35:07.440913   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt with IP's: []
	I1002 20:35:07.579277   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt ...
	I1002 20:35:07.579309   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt: {Name:mk15b2fa98abbfdb48c91ec680acf9fb3d36790e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579497   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key ...
	I1002 20:35:07.579512   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key: {Name:mk9a0c724bfce05599dc583fc00cefa3d87ea4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
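	Each profile cert generated above is signed by the minikube CA and carries the SAN set shown in the log (service IP 10.96.0.1, localhost, node IP 192.168.49.2, HA VIP 192.168.49.254). A simplified Go sketch that reproduces the SAN list; for brevity it is self-signed, whereas minikube's real code signs with its CA key:

```go
// Sketch: generate a server certificate whose DNS and IP SANs match the
// apiserver cert in the log. Self-signed for illustration only.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // ~26280h, as in the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-872795", "localhost", "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```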
	I1002 20:35:07.579634   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:35:07.579676   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:35:07.579703   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:35:07.579724   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:35:07.579741   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:35:07.579761   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:35:07.579777   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:35:07.579793   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:35:07.579863   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:35:07.579919   65735 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:35:07.579934   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:35:07.579978   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:35:07.580017   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:35:07.580054   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:35:07.580109   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:07.580147   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.580168   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.580188   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.580777   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:35:07.598837   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:35:07.615911   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:35:07.632508   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:35:07.649448   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:35:07.666220   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:35:07.682564   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:35:07.698671   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:35:07.715367   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:35:07.733828   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:35:07.750440   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:35:07.767319   65735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:35:07.779274   65735 ssh_runner.go:195] Run: openssl version
	I1002 20:35:07.785412   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:35:07.793537   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797120   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797185   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.830751   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:35:07.839226   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:35:07.847497   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851214   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851284   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.884464   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:35:07.892882   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:35:07.901283   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904919   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904989   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.938825   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
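	The test/link pairs above implement OpenSSL's hash-directory convention: each CA PEM under /etc/ssl/certs needs a <subject-hash>.0 symlink (e.g. b5213941.0 for minikubeCA.pem) so certificate verification can locate it. A sketch that shells out to openssl for the hash, assuming the binary is installed and the process may write to the certs directory:

```go
// Sketch: compute the OpenSSL subject hash of a PEM and create the
// <hash>.0 symlink, as the logged test/ln commands do.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	os.Remove(link) // replace a stale link if present
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```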
	I1002 20:35:07.947236   65735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:35:07.950736   65735 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:35:07.950798   65735 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:35:07.950893   65735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:35:07.950946   65735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:35:07.977912   65735 cri.go:89] found id: ""
	I1002 20:35:07.977979   65735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:35:07.986292   65735 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:35:07.994921   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:35:07.994975   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:35:08.002478   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:35:08.002496   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:35:08.002543   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:35:08.009971   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:35:08.010024   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:35:08.017237   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:35:08.024502   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:35:08.024556   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:35:08.031907   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.039330   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:35:08.039379   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.046463   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:35:08.053807   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:35:08.053903   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:35:08.060624   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:35:08.119106   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:35:08.173560   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:39:12.973904   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:39:12.973990   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:39:12.976774   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:12.976818   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:12.976887   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:12.976934   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:12.976995   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:12.977058   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:12.977098   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:12.977140   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:12.977186   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:12.977225   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:12.977267   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:12.977305   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:12.977341   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:12.977406   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:12.977543   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:12.977704   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:12.977780   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:12.979758   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:12.979842   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:12.979923   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:12.980019   65735 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:39:12.980106   65735 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:39:12.980194   65735 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:39:12.980269   65735 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:39:12.980321   65735 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:39:12.980413   65735 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980466   65735 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:39:12.980568   65735 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980632   65735 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:39:12.980716   65735 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:39:12.980755   65735 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:39:12.980811   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:12.980857   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:12.980906   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:12.980951   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:12.981048   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:12.981126   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:12.981273   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:12.981350   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:12.983755   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:12.983842   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:12.983938   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:12.984036   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:12.984176   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:12.984257   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:12.984363   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:12.984488   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:12.984545   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:12.984753   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:12.984854   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:12.984903   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.922515ms
	I1002 20:39:12.984979   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:12.985054   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:12.985161   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:12.985238   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:39:12.985311   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	I1002 20:39:12.985378   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	I1002 20:39:12.985440   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	I1002 20:39:12.985446   65735 kubeadm.go:318] 
	I1002 20:39:12.985522   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:39:12.985593   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:39:12.985693   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:39:12.985824   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:39:12.985912   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:39:12.986021   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:39:12.986035   65735 kubeadm.go:318] 
	W1002 20:39:12.986169   65735 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.922515ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 20:39:12.986234   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:39:15.725293   65735 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.739035287s)
	I1002 20:39:15.725355   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:39:15.737845   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:39:15.737903   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:39:15.745492   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:39:15.745511   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:39:15.745550   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:39:15.752939   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:39:15.752992   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:39:15.759738   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:39:15.767128   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:39:15.767188   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:39:15.774064   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.781262   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:39:15.781310   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.788069   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:39:15.794962   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:39:15.795005   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:39:15.801555   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:39:15.838178   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:15.838265   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:15.857468   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:15.857554   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:15.857595   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:15.857662   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:15.857732   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:15.857790   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:15.857850   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:15.857910   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:15.857968   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:15.858025   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:15.858074   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:15.913423   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:15.913519   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:15.913695   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:15.919381   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:15.923412   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:15.923499   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:15.923576   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:15.923699   65735 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:39:15.923782   65735 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:39:15.923894   65735 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:39:15.923992   65735 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:39:15.924083   65735 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:39:15.924181   65735 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:39:15.924290   65735 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:39:15.924413   65735 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:39:15.924467   65735 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:39:15.924516   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:15.972082   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:16.453776   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:16.604369   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:16.663878   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:16.897315   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:16.897840   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:16.900020   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:16.904460   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:16.904580   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:16.904708   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:16.904777   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:16.917381   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:16.917507   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:16.923733   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:16.923961   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:16.924014   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:17.026795   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:17.026994   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:18.027761   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001153954s
	I1002 20:39:18.030723   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:18.030874   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:18.030983   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:18.031132   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:43:18.032268   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	I1002 20:43:18.032470   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	I1002 20:43:18.032708   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	I1002 20:43:18.032742   65735 kubeadm.go:318] 
	I1002 20:43:18.032978   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:43:18.033184   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:43:18.033380   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:43:18.033599   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:43:18.033817   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:43:18.034037   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:43:18.034052   65735 kubeadm.go:318] 
	I1002 20:43:18.036299   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:43:18.036439   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:43:18.037026   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 20:43:18.037095   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:43:18.037166   65735 kubeadm.go:402] duration metric: took 8m10.08637402s to StartCluster
	I1002 20:43:18.037208   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:43:18.037266   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:43:18.063637   65735 cri.go:89] found id: ""
	I1002 20:43:18.063696   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.063705   65735 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:43:18.063713   65735 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:43:18.063764   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:43:18.087963   65735 cri.go:89] found id: ""
	I1002 20:43:18.087984   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.087991   65735 logs.go:284] No container was found matching "etcd"
	I1002 20:43:18.087996   65735 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:43:18.088039   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:43:18.111877   65735 cri.go:89] found id: ""
	I1002 20:43:18.111898   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.111905   65735 logs.go:284] No container was found matching "coredns"
	I1002 20:43:18.111911   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:43:18.111958   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:43:18.134814   65735 cri.go:89] found id: ""
	I1002 20:43:18.134834   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.134841   65735 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:43:18.134847   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:43:18.134887   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:43:18.158528   65735 cri.go:89] found id: ""
	I1002 20:43:18.158551   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.158558   65735 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:43:18.158564   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:43:18.158607   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:43:18.182849   65735 cri.go:89] found id: ""
	I1002 20:43:18.182871   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.182878   65735 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:43:18.182883   65735 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:43:18.182926   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:43:18.207447   65735 cri.go:89] found id: ""
	I1002 20:43:18.207470   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.207481   65735 logs.go:284] No container was found matching "kindnet"
	I1002 20:43:18.207493   65735 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:43:18.207507   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:43:18.263385   65735 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:43:18.263409   65735 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:43:18.263421   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:43:18.324936   65735 logs.go:123] Gathering logs for container status ...
	I1002 20:43:18.324969   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:43:18.351509   65735 logs.go:123] Gathering logs for kubelet ...
	I1002 20:43:18.351536   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:43:18.417761   65735 logs.go:123] Gathering logs for dmesg ...
	I1002 20:43:18.417794   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1002 20:43:18.428567   65735 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:43:18.428641   65735 out.go:285] * 
	W1002 20:43:18.428716   65735 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.428735   65735 out.go:285] * 
	W1002 20:43:18.430415   65735 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:43:18.433894   65735 out.go:203] 
	W1002 20:43:18.435307   65735 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.435342   65735 out.go:285] * 
	I1002 20:43:18.437263   65735 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.827075909Z" level=info msg="createCtr: removing container 3e8152e866fa071f321cf012566678af0c3b90587478627e33a052a71fe906e3" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.827105892Z" level=info msg="createCtr: deleting container 3e8152e866fa071f321cf012566678af0c3b90587478627e33a052a71fe906e3 from storage" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:52 ha-872795 crio[775]: time="2025-10-02T20:44:52.829276684Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-872795_kube-system_107e053e4ed538c93835a81754178211_0" id=7c71c313-36a4-43c5-b2c8-964a37b6c95c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.801275985Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=4fca0a49-1065-4871-9912-9e712931de4a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.802205974Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=3341d1d0-63dd-4440-96cb-dbe32f52fd15 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.80316376Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-872795/kube-scheduler" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.803420994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.806973431Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.807354248Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.819369479Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.820888398Z" level=info msg="createCtr: deleting container ID 0bf698f0873c641479ec231acbff24b3fb290183b302d4429c9a7853f9485533 from idIndex" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.820923326Z" level=info msg="createCtr: removing container 0bf698f0873c641479ec231acbff24b3fb290183b302d4429c9a7853f9485533" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.820960943Z" level=info msg="createCtr: deleting container 0bf698f0873c641479ec231acbff24b3fb290183b302d4429c9a7853f9485533 from storage" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:44:59 ha-872795 crio[775]: time="2025-10-02T20:44:59.823048862Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-872795_kube-system_9558d13dd1c45ecd0f3d491377941404_0" id=ab29dfb1-3bde-4846-b92d-fcf83ff81378 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:03 ha-872795 crio[775]: time="2025-10-02T20:45:03.802053178Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=ccd568a2-6589-4c80-82b7-4164cdcfc2e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:45:03 ha-872795 crio[775]: time="2025-10-02T20:45:03.802938558Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=a9691fff-2245-4447-8e3b-31de8ca01c75 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:45:03 ha-872795 crio[775]: time="2025-10-02T20:45:03.803764175Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-872795/kube-controller-manager" id=885a043f-6e0d-4a9f-b305-2e99d4a90b3a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:03 ha-872795 crio[775]: time="2025-10-02T20:45:03.803941272Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:03 ha-872795 crio[775]: time="2025-10-02T20:45:03.807189375Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:03 ha-872795 crio[775]: time="2025-10-02T20:45:03.807564321Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:03 ha-872795 crio[775]: time="2025-10-02T20:45:03.823732172Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=885a043f-6e0d-4a9f-b305-2e99d4a90b3a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:03 ha-872795 crio[775]: time="2025-10-02T20:45:03.825048078Z" level=info msg="createCtr: deleting container ID 6c3a4ea489abed9b552b579e7db3d40efdb4c204cf679ef9ef040a14cf57dce9 from idIndex" id=885a043f-6e0d-4a9f-b305-2e99d4a90b3a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:03 ha-872795 crio[775]: time="2025-10-02T20:45:03.825091575Z" level=info msg="createCtr: removing container 6c3a4ea489abed9b552b579e7db3d40efdb4c204cf679ef9ef040a14cf57dce9" id=885a043f-6e0d-4a9f-b305-2e99d4a90b3a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:03 ha-872795 crio[775]: time="2025-10-02T20:45:03.825127683Z" level=info msg="createCtr: deleting container 6c3a4ea489abed9b552b579e7db3d40efdb4c204cf679ef9ef040a14cf57dce9 from storage" id=885a043f-6e0d-4a9f-b305-2e99d4a90b3a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:03 ha-872795 crio[775]: time="2025-10-02T20:45:03.827376242Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=885a043f-6e0d-4a9f-b305-2e99d4a90b3a name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:45:05.038293    3880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:05.038852    3880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:05.040421    3880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:05.040888    3880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:05.042174    3880 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:45:05 up  1:27,  0 user,  load average: 0.83, 0.47, 0.25
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:44:52 ha-872795 kubelet[1957]:         container kube-apiserver start failed in pod kube-apiserver-ha-872795_kube-system(107e053e4ed538c93835a81754178211): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:52 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:44:52 ha-872795 kubelet[1957]: E1002 20:44:52.829696    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-872795" podUID="107e053e4ed538c93835a81754178211"
	Oct 02 20:44:56 ha-872795 kubelet[1957]: E1002 20:44:56.709315    1957 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-872795.186ac7230cb11fa8  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-872795,UID:ha-872795,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-872795 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-872795,},FirstTimestamp:2025-10-02 20:39:17.792317352 +0000 UTC m=+0.765279708,LastTimestamp:2025-10-02 20:39:17.792317352 +0000 UTC m=+0.765279708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-872795,}"
	Oct 02 20:44:57 ha-872795 kubelet[1957]: E1002 20:44:57.817829    1957 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-872795\" not found"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.401703    1957 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.439050    1957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: I1002 20:44:59.605385    1957 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.605749    1957 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.800841    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.823342    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:44:59 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:59 ha-872795 kubelet[1957]:  > podSandboxID="d10793148ad7216a0ef1666e33973783c7ba51256a58959b454c6b69c2bbe01a"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.823436    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:44:59 ha-872795 kubelet[1957]:         container kube-scheduler start failed in pod kube-scheduler-ha-872795_kube-system(9558d13dd1c45ecd0f3d491377941404): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:44:59 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:44:59 ha-872795 kubelet[1957]: E1002 20:44:59.823467    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-872795" podUID="9558d13dd1c45ecd0f3d491377941404"
	Oct 02 20:45:03 ha-872795 kubelet[1957]: E1002 20:45:03.801623    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:45:03 ha-872795 kubelet[1957]: E1002 20:45:03.827679    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:45:03 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:03 ha-872795 kubelet[1957]:  > podSandboxID="079790ff593aadbe100b150a42b87bda86092d1fcff86f8774566f658d455d0a"
	Oct 02 20:45:03 ha-872795 kubelet[1957]: E1002 20:45:03.827784    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:45:03 ha-872795 kubelet[1957]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-872795_kube-system(bb93bd54c951d044e2ddbaf0dd48a41c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:03 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:45:03 ha-872795 kubelet[1957]: E1002 20:45:03.827820    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-872795" podUID="bb93bd54c951d044e2ddbaf0dd48a41c"
	

-- /stdout --
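The repeated CreateContainerError above ("cannot open sd-bus: No such file or directory") usually means the runtime tried to reach systemd over the system D-Bus socket and none was reachable inside the kic node container. A minimal check, run inside the node container; the socket path is the conventional sd-bus location and is an assumption here, not something this log confirms:

	// sdbus_check.go: hypothetical probe for the system bus socket that
	// sd-bus opens; its absence matches the kubelet failures quoted above.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		const sock = "/run/dbus/system_bus_socket" // assumed conventional path
		if _, err := os.Stat(sock); err != nil {
			fmt.Printf("system bus socket missing (%v); sd-bus opens will fail\n", err)
			return
		}
		fmt.Println("system bus socket present at", sock)
	}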
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 6 (288.938377ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:45:05.405249   75138 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
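Every status probe in this run trips over the same root-cause signature: status.go:458 cannot find an "ha-872795" entry in the test kubeconfig. A minimal sketch of that lookup using client-go (clientcmd.LoadFromFile and the Contexts map are the real API; the profile name is taken from this log):

	// kubeconfig_check.go: does the named context exist in a kubeconfig file?
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile(os.Args[1]) // path of the kubeconfig to inspect
		if err != nil {
			fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts["ha-872795"]; !ok {
			fmt.Println(`context "ha-872795" missing; "minikube update-context" should restore it`)
			return
		}
		fmt.Println("context present")
	}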
--- FAIL: TestMultiControlPlane/serial/CopyFile (1.50s)

TestMultiControlPlane/serial/StopSecondaryNode (1.56s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 node stop m02 --alsologtostderr -v 5: exit status 85 (54.362521ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 20:45:05.461077   75248 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:05.461363   75248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:05.461374   75248 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:05.461378   75248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:05.461554   75248 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:45:05.461809   75248 mustload.go:65] Loading cluster: ha-872795
	I1002 20:45:05.462111   75248 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:05.463938   75248 out.go:203] 
	W1002 20:45:05.465127   75248 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1002 20:45:05.465139   75248 out.go:285] * 
	* 
	W1002 20:45:05.468312   75248 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:45:05.469519   75248 out.go:203] 

** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-872795 node stop m02 --alsologtostderr -v 5": exit status 85
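Exit status 85 (GUEST_NODE_RETRIEVE) means the profile never recorded an m02 node; consistent with that, the `node add` command in the audit table below has no END TIME, so it never completed. A small sketch that lists the nodes a profile's config.json actually records; the field names match the cluster config dumped later in this log, and it is an assumption that config.json serializes them unchanged:

	// profile_nodes.go: print the nodes recorded in a minikube profile config.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// Only the fields this sketch needs.
	type profileConfig struct {
		Nodes []struct {
			Name         string
			ControlPlane bool
		}
	}

	func main() {
		raw, err := os.ReadFile(os.Args[1]) // e.g. .minikube/profiles/ha-872795/config.json
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var cfg profileConfig
		if err := json.Unmarshal(raw, &cfg); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, n := range cfg.Nodes {
			fmt.Printf("node %q control-plane=%v\n", n.Name, n.ControlPlane)
		}
	}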
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5: exit status 6 (278.846308ms)

-- stdout --
	ha-872795
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 20:45:05.513323   75259 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:05.513415   75259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:05.513426   75259 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:05.513433   75259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:05.513635   75259 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:45:05.513847   75259 out.go:368] Setting JSON to false
	I1002 20:45:05.513873   75259 mustload.go:65] Loading cluster: ha-872795
	I1002 20:45:05.513926   75259 notify.go:221] Checking for updates...
	I1002 20:45:05.514262   75259 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:05.514294   75259 status.go:174] checking status of ha-872795 ...
	I1002 20:45:05.514716   75259 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:05.532563   75259 status.go:371] ha-872795 host status = "Running" (err=<nil>)
	I1002 20:45:05.532607   75259 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:05.532883   75259 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:45:05.552057   75259 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:05.552342   75259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:45:05.552377   75259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:45:05.570658   75259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:45:05.668764   75259 ssh_runner.go:195] Run: systemctl --version
	I1002 20:45:05.674744   75259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:45:05.686401   75259 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:05.738471   75259 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:45:05.728974867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 20:45:05.738910   75259 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:45:05.738934   75259 api_server.go:166] Checking apiserver status ...
	I1002 20:45:05.738970   75259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 20:45:05.748781   75259 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:45:05.748801   75259 status.go:463] ha-872795 apiserver status = Running (err=<nil>)
	I1002 20:45:05.748812   75259 status.go:176] ha-872795 status: &{Name:ha-872795 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:374: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5" : exit status 6
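The `apiserver: Stopped` verdict comes from the pgrep probe visible in the stderr above exiting 1, meaning no kube-apiserver process is running on the node. The same probe reduced to a standalone sketch, driven through `docker exec` instead of SSH; the container name and pattern are copied from this log, and it assumes the node container is reachable from the host Docker daemon:

	// apiserver_probe.go: hypothetical re-run of the status.go pgrep check.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// docker exec runs as root by default, so no sudo is needed here.
		out, err := exec.Command("docker", "exec", "ha-872795",
			"pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			// pgrep exits 1 when nothing matches, the case seen in this run.
			fmt.Println("no kube-apiserver process found:", err)
			return
		}
		fmt.Printf("kube-apiserver pid: %s", out)
	}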
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 66301,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:35:02.679281779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f7265370a58453c840a8192d5da4a6feb4bbbb948c2dbabc9298cbd8189ca6f",
	            "SandboxKey": "/var/run/docker/netns/6f7265370a58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:33:39:9e:e0:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "632391284aa44c2eb0bec81b03555e3fed7084145763827002664c08b386a588",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
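At the Docker level the container is healthy; the useful data in the inspect dump are the dynamically assigned host ports (8443/tcp maps to 32786 here). The lookup minikube performs for 22/tcp appears verbatim in the cli_runner lines above; pointed at the apiserver port instead, it reduces to:

	// hostport.go: ask Docker which host port backs the container's 8443/tcp.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
			"ha-872795").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}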
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 6 (288.686631ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:45:06.046180   75381 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-753218 image build -t localhost/my-image:functional-753218 testdata/build --alsologtostderr          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format json --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format table --alsologtostderr                                                     │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls                                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ delete  │ -p functional-753218                                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ start   │ ha-872795 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- rollout status deployment/busybox                                                          │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node add --alsologtostderr -v 5                                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node stop m02 --alsologtostderr -v 5                                                                  │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:34:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:34:57.603077   65735 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:34:57.603305   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603314   65735 out.go:374] Setting ErrFile to fd 2...
	I1002 20:34:57.603317   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603506   65735 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:34:57.604047   65735 out.go:368] Setting JSON to false
	I1002 20:34:57.604900   65735 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4647,"bootTime":1759432651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:34:57.604978   65735 start.go:140] virtualization: kvm guest
	I1002 20:34:57.606757   65735 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:34:57.608100   65735 notify.go:221] Checking for updates...
	I1002 20:34:57.608137   65735 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:34:57.609378   65735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:34:57.610568   65735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:34:57.611851   65735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:34:57.613234   65735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:34:57.614447   65735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:34:57.615922   65735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:34:57.640606   65735 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:34:57.640699   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.694592   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.684173929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.694720   65735 docker.go:319] overlay module found
	I1002 20:34:57.696786   65735 out.go:179] * Using the docker driver based on user configuration
	I1002 20:34:57.698015   65735 start.go:306] selected driver: docker
	I1002 20:34:57.698032   65735 start.go:936] validating driver "docker" against <nil>
	I1002 20:34:57.698041   65735 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:34:57.698548   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.751429   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.741121046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.751598   65735 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:34:57.751837   65735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:34:57.753668   65735 out.go:179] * Using Docker driver with root privileges
	I1002 20:34:57.755151   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:34:57.755225   65735 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 20:34:57.755237   65735 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:34:57.755322   65735 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1002 20:34:57.756619   65735 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:34:57.757986   65735 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:34:57.759335   65735 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:34:57.760371   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:57.760415   65735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:34:57.760432   65735 cache.go:59] Caching tarball of preloaded images
	I1002 20:34:57.760405   65735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:34:57.760507   65735 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:34:57.760515   65735 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:34:57.760879   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:34:57.760904   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json: {Name:mk498e6817e1668b9a52683e21d0fee45dc5b922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:34:57.779767   65735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:34:57.779785   65735 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:34:57.779798   65735 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:34:57.779817   65735 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:34:57.779902   65735 start.go:365] duration metric: took 70.896µs to acquireMachinesLock for "ha-872795"
	I1002 20:34:57.779922   65735 start.go:94] Provisioning new machine with config: &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:34:57.779985   65735 start.go:126] createHost starting for "" (driver="docker")
	I1002 20:34:57.782551   65735 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 20:34:57.782770   65735 start.go:160] libmachine.API.Create for "ha-872795" (driver="docker")
	I1002 20:34:57.782797   65735 client.go:168] LocalClient.Create starting
	I1002 20:34:57.782848   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem
	I1002 20:34:57.782884   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782899   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.782957   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem
	I1002 20:34:57.782975   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782985   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.783261   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:34:57.799786   65735 cli_runner.go:211] docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:34:57.799838   65735 network_create.go:284] running [docker network inspect ha-872795] to gather additional debugging logs...
	I1002 20:34:57.799850   65735 cli_runner.go:164] Run: docker network inspect ha-872795
	W1002 20:34:57.816003   65735 cli_runner.go:211] docker network inspect ha-872795 returned with exit code 1
	I1002 20:34:57.816028   65735 network_create.go:287] error running [docker network inspect ha-872795]: docker network inspect ha-872795: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-872795 not found
	I1002 20:34:57.816042   65735 network_create.go:289] output of [docker network inspect ha-872795]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-872795 not found
	
	** /stderr **
	I1002 20:34:57.816123   65735 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:34:57.832837   65735 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d0e700}
	I1002 20:34:57.832869   65735 network_create.go:124] attempt to create docker network ha-872795 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:34:57.832917   65735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-872795 ha-872795
	I1002 20:34:57.888359   65735 network_create.go:108] docker network ha-872795 192.168.49.0/24 created
	I1002 20:34:57.888401   65735 kic.go:121] calculated static IP "192.168.49.2" for the "ha-872795" container
	I1002 20:34:57.888473   65735 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:34:57.905091   65735 cli_runner.go:164] Run: docker volume create ha-872795 --label name.minikube.sigs.k8s.io=ha-872795 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:34:57.922367   65735 oci.go:103] Successfully created a docker volume ha-872795
	I1002 20:34:57.922439   65735 cli_runner.go:164] Run: docker run --rm --name ha-872795-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --entrypoint /usr/bin/test -v ha-872795:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:34:58.293551   65735 oci.go:107] Successfully prepared a docker volume ha-872795
	I1002 20:34:58.293606   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:58.293630   65735 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:34:58.293727   65735 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:35:02.570537   65735 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.276723115s)
	I1002 20:35:02.570574   65735 kic.go:203] duration metric: took 4.276941565s to extract preloaded images to volume ...
	W1002 20:35:02.570728   65735 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 20:35:02.570763   65735 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 20:35:02.570812   65735 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:35:02.622879   65735 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-872795 --name ha-872795 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-872795 --network ha-872795 --ip 192.168.49.2 --volume ha-872795:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:35:02.884170   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Running}}
	I1002 20:35:02.901900   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:02.919319   65735 cli_runner.go:164] Run: docker exec ha-872795 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:35:02.965612   65735 oci.go:144] the created container "ha-872795" has a running status.
	I1002 20:35:02.965668   65735 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa...
	I1002 20:35:03.839142   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 20:35:03.839192   65735 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:35:03.863522   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.880799   65735 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:35:03.880817   65735 kic_runner.go:114] Args: [docker exec --privileged ha-872795 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:35:03.926950   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.943420   65735 machine.go:93] provisionDockerMachine start ...
	I1002 20:35:03.943502   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:03.960150   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:03.960365   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:03.960377   65735 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:35:04.102560   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.102595   65735 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:35:04.102672   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.119770   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.119958   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.119969   65735 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:35:04.269954   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.270024   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.289297   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.289532   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.289560   65735 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:35:04.431294   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:35:04.431331   65735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:35:04.431373   65735 ubuntu.go:190] setting up certificates
	I1002 20:35:04.431385   65735 provision.go:84] configureAuth start
	I1002 20:35:04.431438   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:04.448348   65735 provision.go:143] copyHostCerts
	I1002 20:35:04.448379   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448406   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:35:04.448415   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448488   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:35:04.448574   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448595   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:35:04.448601   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448642   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:35:04.448726   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448747   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:35:04.448752   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448791   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:35:04.448862   65735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:35:04.988583   65735 provision.go:177] copyRemoteCerts
	I1002 20:35:04.988660   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:35:04.988705   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.006318   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.107594   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:35:05.107664   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:35:05.126297   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:35:05.126364   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 20:35:05.143480   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:35:05.143534   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:35:05.159876   65735 provision.go:87] duration metric: took 728.473665ms to configureAuth
	I1002 20:35:05.159898   65735 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:35:05.160046   65735 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:35:05.160126   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.177170   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:05.177376   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:05.177394   65735 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:35:05.428320   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:35:05.428345   65735 machine.go:96] duration metric: took 1.484903501s to provisionDockerMachine
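	[editor's note] The drop-in written just above only takes effect if the kicbase image's crio.service actually sources /etc/sysconfig/crio.minikube; that hookup is an assumption about the unit file, not something this log shows. A quick way to confirm both the file and the wiring:
	# Verify the CRI-O options drop-in landed and that the unit references it.
	# Assumes crio.service uses an EnvironmentFile= directive (unverified here).
	docker exec ha-872795 cat /etc/sysconfig/crio.minikube
	docker exec ha-872795 sh -c 'systemctl cat crio | grep -i EnvironmentFile'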
	I1002 20:35:05.428356   65735 client.go:171] duration metric: took 7.645553848s to LocalClient.Create
	I1002 20:35:05.428375   65735 start.go:168] duration metric: took 7.645605965s to libmachine.API.Create "ha-872795"
	I1002 20:35:05.428384   65735 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:35:05.428397   65735 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:35:05.428457   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:35:05.428518   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.445519   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.548684   65735 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:35:05.552289   65735 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:35:05.552315   65735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:35:05.552328   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:35:05.552389   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:35:05.552506   65735 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:35:05.552523   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:35:05.552690   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:35:05.559922   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:05.579393   65735 start.go:297] duration metric: took 150.9926ms for postStartSetup
	I1002 20:35:05.579737   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.596296   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:35:05.596542   65735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:35:05.596580   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.613097   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.710446   65735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:35:05.714631   65735 start.go:129] duration metric: took 7.934634425s to createHost
	I1002 20:35:05.714669   65735 start.go:84] releasing machines lock for "ha-872795", held for 7.934754949s
	I1002 20:35:05.714730   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.731548   65735 ssh_runner.go:195] Run: cat /version.json
	I1002 20:35:05.731603   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.731636   65735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:35:05.731714   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.749775   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.750794   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.899878   65735 ssh_runner.go:195] Run: systemctl --version
	I1002 20:35:05.906012   65735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:35:05.939030   65735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:35:05.945162   65735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:35:05.945228   65735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:35:05.970822   65735 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
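	[editor's note] The find/mv pipeline above sidelines any bridge or podman CNI configs so that kindnet (recommended later for this multinode profile) can own pod networking. A sketch of how to confirm what was renamed; the two file names are the ones this run reported, and may differ on other base images:
	# List what is left in /etc/cni/net.d after the renames above. This run
	# reported 10-crio-bridge.conflist.disabled and 87-podman-bridge.conflist
	# as the disabled bridge configs.
	docker exec ha-872795 ls -la /etc/cni/net.d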
	I1002 20:35:05.970843   65735 start.go:496] detecting cgroup driver to use...
	I1002 20:35:05.970881   65735 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:35:05.970920   65735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:35:05.986282   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:35:05.998266   65735 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:35:05.998312   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:35:06.014196   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:35:06.031061   65735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:35:06.110898   65735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:35:06.195426   65735 docker.go:234] disabling docker service ...
	I1002 20:35:06.195481   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:35:06.213240   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:35:06.225772   65735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:35:06.305091   65735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:35:06.379591   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:35:06.391746   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:35:06.405291   65735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:35:06.405351   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.415561   65735 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:35:06.415640   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.425296   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.433995   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.442516   65735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:35:06.450342   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.458541   65735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.471597   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.479764   65735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:35:06.486745   65735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:35:06.493925   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:06.570695   65735 ssh_runner.go:195] Run: sudo systemctl restart crio
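	[editor's note] Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the systemd cgroup manager, the 3.10.1 pause image, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A spot-check after the restart; expected values are reconstructed from the commands, not captured from the node:
	docker exec ha-872795 sh -c \
	  'grep -E "pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start" \
	     /etc/crio/crio.conf.d/02-crio.conf'
	# Expected, per the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",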
	I1002 20:35:06.674347   65735 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:35:06.674398   65735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:35:06.678210   65735 start.go:564] Will wait 60s for crictl version
	I1002 20:35:06.678265   65735 ssh_runner.go:195] Run: which crictl
	I1002 20:35:06.681670   65735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:35:06.705846   65735 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:35:06.705915   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.732749   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.761896   65735 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:35:06.763235   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:35:06.779789   65735 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:35:06.783762   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.793731   65735 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:35:06.793825   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:35:06.793893   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.823605   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.823628   65735 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:35:06.823701   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.848195   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.848217   65735 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:35:06.848224   65735 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:35:06.848297   65735 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
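	[editor's note] This rendered fragment becomes the kubelet's systemd drop-in; the scp at 20:35:06.920 below writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A generic check (not part of this run) that systemd picked up the override:
	# Show the merged kubelet unit, drop-ins included. The empty ExecStart=
	# in the fragment above deliberately clears the base unit's command
	# before the full ExecStart line replaces it.
	docker exec ha-872795 systemctl cat kubelet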
	I1002 20:35:06.848363   65735 ssh_runner.go:195] Run: crio config
	I1002 20:35:06.893288   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:35:06.893308   65735 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:35:06.893324   65735 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:35:06.893344   65735 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:35:06.893447   65735 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
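	[editor's note] The config above is staged as /var/tmp/minikube/kubeadm.yaml.new (scp at 20:35:06.947 below) and promoted to kubeadm.yaml at 20:35:07.986. Recent kubeadm releases can sanity-check such a file before init; a hypothetical pre-flight on the node, using this run's binary path:
	# Validate the rendered kubeadm config before 'kubeadm init' runs.
	# 'kubeadm config validate' exists in recent releases; paths are the
	# ones this run used.
	docker exec ha-872795 sh -c \
	  '/var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	     --config /var/tmp/minikube/kubeadm.yaml'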
	
	I1002 20:35:06.893469   65735 kube-vip.go:115] generating kube-vip config ...
	I1002 20:35:06.893510   65735 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 20:35:06.905349   65735 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:35:06.905448   65735 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
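	[editor's note] Because the lsmod probe at 20:35:06.905 found no ip_vs modules, the manifest above runs kube-vip in ARP mode (vip_arp=true) with lease-based leader election instead of IPVS control-plane load balancing. Had IPVS been wanted, the modules would have to be present on the host before start; a hypothetical fix on the Jenkins host:
	# Load the IPVS kernel modules so kube-vip could enable control-plane
	# load balancing (hypothetical; this run proceeded in ARP mode instead).
	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	lsmod | grep ip_vs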
	I1002 20:35:06.905495   65735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:35:06.913067   65735 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:35:06.913130   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 20:35:06.920364   65735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:35:06.932249   65735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:35:06.947038   65735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:35:06.959316   65735 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 20:35:06.973553   65735 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 20:35:06.977144   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
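	[editor's note] Both name injections (host.minikube.internal at 20:35:06.783 and control-plane.minikube.internal here) use the same grep-v-then-append idiom, so repeated runs replace rather than duplicate the entry. A quick check that both landed:
	# Confirm the two minikube-internal host entries written above.
	docker exec ha-872795 grep -E \
	  'host\.minikube\.internal|control-plane\.minikube\.internal' /etc/hosts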
	I1002 20:35:06.987569   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:07.065543   65735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:35:07.088051   65735 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:35:07.088073   65735 certs.go:195] generating shared ca certs ...
	I1002 20:35:07.088093   65735 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.088253   65735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:35:07.088318   65735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:35:07.088333   65735 certs.go:257] generating profile certs ...
	I1002 20:35:07.088394   65735 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:35:07.088419   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt with IP's: []
	I1002 20:35:07.271177   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt ...
	I1002 20:35:07.271216   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt: {Name:mkc9958351da3092e35cf2a0dff49f20b7dda9b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271433   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key ...
	I1002 20:35:07.271450   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key: {Name:mk9049d9b976553e3e7ceaf1eae8281114f2b93c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271566   65735 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2
	I1002 20:35:07.271586   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 20:35:07.440257   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 ...
	I1002 20:35:07.440291   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2: {Name:mk26eb0eaef560c0eb32f96e5b3e81634dfe2a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440486   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 ...
	I1002 20:35:07.440505   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2: {Name:mkc88fc8c0ccf5f6abaf6c3777b5135914058a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440621   65735 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt
	I1002 20:35:07.440775   65735 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key
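	[editor's note] The apiserver cert minted above carries the service VIP (10.96.0.1), loopback, the node IP and the HA VIP 192.168.49.254 as SANs, which is what lets clients reach the API server under any of those addresses. A standard way to eyeball them, using this run's path on the Jenkins host:
	# Dump the Subject Alternative Names of the freshly generated apiserver cert.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'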
	I1002 20:35:07.440868   65735 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:35:07.440913   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt with IP's: []
	I1002 20:35:07.579277   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt ...
	I1002 20:35:07.579309   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt: {Name:mk15b2fa98abbfdb48c91ec680acf9fb3d36790e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579497   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key ...
	I1002 20:35:07.579512   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key: {Name:mk9a0c724bfce05599dc583fc00cefa3d87ea4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579634   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:35:07.579676   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:35:07.579703   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:35:07.579724   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:35:07.579741   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:35:07.579761   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:35:07.579777   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:35:07.579793   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:35:07.579863   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:35:07.579919   65735 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:35:07.579934   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:35:07.579978   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:35:07.580017   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:35:07.580054   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:35:07.580109   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:07.580147   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.580168   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.580188   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.580777   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:35:07.598837   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:35:07.615911   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:35:07.632508   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:35:07.649448   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:35:07.666220   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:35:07.682564   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:35:07.698671   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:35:07.715367   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:35:07.733828   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:35:07.750440   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:35:07.767319   65735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:35:07.779274   65735 ssh_runner.go:195] Run: openssl version
	I1002 20:35:07.785412   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:35:07.793537   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797120   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797185   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.830751   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:35:07.839226   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:35:07.847497   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851214   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851284   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.884464   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:35:07.892882   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:35:07.901283   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904919   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904989   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.938825   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
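	[editor's note] The <hash>.0 symlinks created above follow OpenSSL's hashed-directory convention, which is how the system trust store looks up a CA by subject hash. The same link can be rebuilt by hand; b5213941 is the hash this run computed for minikubeCA.pem:
	# Recreate one of the hash links above manually.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"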
	I1002 20:35:07.947236   65735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:35:07.950736   65735 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:35:07.950798   65735 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:35:07.950893   65735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:35:07.950946   65735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:35:07.977912   65735 cri.go:89] found id: ""
	I1002 20:35:07.977979   65735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:35:07.986292   65735 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:35:07.994921   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:35:07.994975   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:35:08.002478   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:35:08.002496   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:35:08.002543   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:35:08.009971   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:35:08.010024   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:35:08.017237   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:35:08.024502   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:35:08.024556   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:35:08.031907   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.039330   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:35:08.039379   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.046463   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:35:08.053807   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:35:08.053903   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:35:08.060624   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:35:08.119106   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:35:08.173560   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:39:12.973904   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:39:12.973990   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:39:12.976774   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:12.976818   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:12.976887   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:12.976934   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:12.976995   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:12.977058   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:12.977098   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:12.977140   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:12.977186   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:12.977225   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:12.977267   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:12.977305   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:12.977341   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:12.977406   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:12.977543   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:12.977704   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:12.977780   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:12.979758   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:12.979842   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:12.979923   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:12.980019   65735 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:39:12.980106   65735 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:39:12.980194   65735 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:39:12.980269   65735 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:39:12.980321   65735 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:39:12.980413   65735 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980466   65735 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:39:12.980568   65735 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980632   65735 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:39:12.980716   65735 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:39:12.980755   65735 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:39:12.980811   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:12.980857   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:12.980906   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:12.980951   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:12.981048   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:12.981126   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:12.981273   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:12.981350   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:12.983755   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:12.983842   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:12.983938   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:12.984036   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:12.984176   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:12.984257   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:12.984363   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:12.984488   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:12.984545   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:12.984753   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:12.984854   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:12.984903   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.922515ms
	I1002 20:39:12.984979   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:12.985054   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:12.985161   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:12.985238   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:39:12.985311   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	I1002 20:39:12.985378   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	I1002 20:39:12.985440   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	I1002 20:39:12.985446   65735 kubeadm.go:318] 
	I1002 20:39:12.985522   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:39:12.985593   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:39:12.985693   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:39:12.985824   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:39:12.985912   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:39:12.986021   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:39:12.986035   65735 kubeadm.go:318] 
	W1002 20:39:12.986169   65735 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.922515ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
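Editor's note: before minikube resets and retries below, the crictl hint in the failure text can be followed verbatim on the node. A minimal triage sketch, reusing the socket path the log itself prints (CONTAINERID is a placeholder for whatever the first command turns up):

	# list every Kubernetes container CRI-O knows about, including exited ones
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then inspect the logs of whichever container is stuck in Created/Exited
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID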
	
	I1002 20:39:12.986234   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:39:15.725293   65735 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.739035287s)
	I1002 20:39:15.725355   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:39:15.737845   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:39:15.737903   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:39:15.745492   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:39:15.745511   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:39:15.745550   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:39:15.752939   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:39:15.752992   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:39:15.759738   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:39:15.767128   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:39:15.767188   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:39:15.774064   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.781262   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:39:15.781310   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.788069   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:39:15.794962   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:39:15.795005   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:39:15.801555   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:39:15.838178   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:15.838265   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:15.857468   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:15.857554   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:15.857595   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:15.857662   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:15.857732   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:15.857790   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:15.857850   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:15.857910   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:15.857968   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:15.858025   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:15.858074   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:15.913423   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:15.913519   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:15.913695   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:15.919381   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:15.923412   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:15.923499   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:15.923576   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:15.923699   65735 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:39:15.923782   65735 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:39:15.923894   65735 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:39:15.923992   65735 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:39:15.924083   65735 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:39:15.924181   65735 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:39:15.924290   65735 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:39:15.924413   65735 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:39:15.924467   65735 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:39:15.924516   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:15.972082   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:16.453776   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:16.604369   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:16.663878   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:16.897315   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:16.897840   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:16.900020   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:16.904460   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:16.904580   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:16.904708   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:16.904777   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:16.917381   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:16.917507   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:16.923733   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:16.923961   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:16.924014   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:17.026795   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:17.026994   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:18.027761   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001153954s
	I1002 20:39:18.030723   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:18.030874   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:18.030983   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:18.031132   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:43:18.032268   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	I1002 20:43:18.032470   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	I1002 20:43:18.032708   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	I1002 20:43:18.032742   65735 kubeadm.go:318] 
	I1002 20:43:18.032978   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:43:18.033184   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:43:18.033380   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:43:18.033599   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:43:18.033817   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:43:18.034037   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:43:18.034052   65735 kubeadm.go:318] 
	I1002 20:43:18.036299   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:43:18.036439   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:43:18.037026   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 20:43:18.037095   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
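Editor's note: of the two preflight warnings above, only the Service-Kubelet one is directly actionable as written. Enabling the unit is cosmetic here, since minikube starts the kubelet itself, but it silences the check (sketch):

	sudo systemctl enable kubelet.service
	systemctl is-enabled kubelet   # should now print "enabled"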
	I1002 20:43:18.037166   65735 kubeadm.go:402] duration metric: took 8m10.08637402s to StartCluster
	I1002 20:43:18.037208   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:43:18.037266   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:43:18.063637   65735 cri.go:89] found id: ""
	I1002 20:43:18.063696   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.063705   65735 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:43:18.063713   65735 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:43:18.063764   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:43:18.087963   65735 cri.go:89] found id: ""
	I1002 20:43:18.087984   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.087991   65735 logs.go:284] No container was found matching "etcd"
	I1002 20:43:18.087996   65735 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:43:18.088039   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:43:18.111877   65735 cri.go:89] found id: ""
	I1002 20:43:18.111898   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.111905   65735 logs.go:284] No container was found matching "coredns"
	I1002 20:43:18.111911   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:43:18.111958   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:43:18.134814   65735 cri.go:89] found id: ""
	I1002 20:43:18.134834   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.134841   65735 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:43:18.134847   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:43:18.134887   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:43:18.158528   65735 cri.go:89] found id: ""
	I1002 20:43:18.158551   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.158558   65735 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:43:18.158564   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:43:18.158607   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:43:18.182849   65735 cri.go:89] found id: ""
	I1002 20:43:18.182871   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.182878   65735 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:43:18.182883   65735 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:43:18.182926   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:43:18.207447   65735 cri.go:89] found id: ""
	I1002 20:43:18.207470   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.207481   65735 logs.go:284] No container was found matching "kindnet"
	I1002 20:43:18.207493   65735 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:43:18.207507   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:43:18.263385   65735 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
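Editor's note: every kubectl error above reduces to "nothing is listening on 8443". A quick confirmation from inside the node, against the same endpoint the control-plane check polls (a sketch; assumes curl and ss are available in the kicbase image):

	# probe the apiserver livez endpoint directly (-k because the cluster CA is minikube's own)
	curl -sk https://192.168.49.2:8443/livez; echo
	# check whether any process is bound to 8443 at all
	sudo ss -ltnp | grep 8443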
	I1002 20:43:18.263409   65735 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:43:18.263421   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:43:18.324936   65735 logs.go:123] Gathering logs for container status ...
	I1002 20:43:18.324969   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:43:18.351509   65735 logs.go:123] Gathering logs for kubelet ...
	I1002 20:43:18.351536   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:43:18.417761   65735 logs.go:123] Gathering logs for dmesg ...
	I1002 20:43:18.417794   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1002 20:43:18.428567   65735 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:43:18.428641   65735 out.go:285] * 
	W1002 20:43:18.428716   65735 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.428735   65735 out.go:285] * 
	W1002 20:43:18.430415   65735 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:43:18.433894   65735 out.go:203] 
	W1002 20:43:18.435307   65735 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.435342   65735 out.go:285] * 
	I1002 20:43:18.437263   65735 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:45:03 ha-872795 crio[775]: time="2025-10-02T20:45:03.825091575Z" level=info msg="createCtr: removing container 6c3a4ea489abed9b552b579e7db3d40efdb4c204cf679ef9ef040a14cf57dce9" id=885a043f-6e0d-4a9f-b305-2e99d4a90b3a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:03 ha-872795 crio[775]: time="2025-10-02T20:45:03.825127683Z" level=info msg="createCtr: deleting container 6c3a4ea489abed9b552b579e7db3d40efdb4c204cf679ef9ef040a14cf57dce9 from storage" id=885a043f-6e0d-4a9f-b305-2e99d4a90b3a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:03 ha-872795 crio[775]: time="2025-10-02T20:45:03.827376242Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=885a043f-6e0d-4a9f-b305-2e99d4a90b3a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.802534912Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=618e962e-dd71-4480-9eb0-cca3236a192c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.802678466Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=1ec449fd-f9d8-4fdf-808c-0394de277a28 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.803436534Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=eb11d595-5487-48ae-bc2b-bad9d7754c27 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.803509056Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=40a87680-0bd6-4405-bac9-4206c84a827d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.804418555Z" level=info msg="Creating container: kube-system/etcd-ha-872795/etcd" id=db25fa85-a0e1-46b4-b318-112b1eff31ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.804473062Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-872795/kube-apiserver" id=f5e91f44-78a7-4db3-9b84-4a32b77b50a2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.804630538Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.804726597Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.81004713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.810558939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.811450856Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.812003578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.828858261Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f5e91f44-78a7-4db3-9b84-4a32b77b50a2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.83026621Z" level=info msg="createCtr: deleting container ID fecd6ab9a7d63c0fa93c4b37ffec768908f24b3263ae1b7588327ae212faacaa from idIndex" id=f5e91f44-78a7-4db3-9b84-4a32b77b50a2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.830302069Z" level=info msg="createCtr: removing container fecd6ab9a7d63c0fa93c4b37ffec768908f24b3263ae1b7588327ae212faacaa" id=f5e91f44-78a7-4db3-9b84-4a32b77b50a2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.83033202Z" level=info msg="createCtr: deleting container fecd6ab9a7d63c0fa93c4b37ffec768908f24b3263ae1b7588327ae212faacaa from storage" id=f5e91f44-78a7-4db3-9b84-4a32b77b50a2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.830511228Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=db25fa85-a0e1-46b4-b318-112b1eff31ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.831991725Z" level=info msg="createCtr: deleting container ID e37fdcff3c307af69c33b435964236091cfe37a0b152856ca38b47dba5025bc3 from idIndex" id=db25fa85-a0e1-46b4-b318-112b1eff31ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.83206365Z" level=info msg="createCtr: removing container e37fdcff3c307af69c33b435964236091cfe37a0b152856ca38b47dba5025bc3" id=db25fa85-a0e1-46b4-b318-112b1eff31ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.832106279Z" level=info msg="createCtr: deleting container e37fdcff3c307af69c33b435964236091cfe37a0b152856ca38b47dba5025bc3 from storage" id=db25fa85-a0e1-46b4-b318-112b1eff31ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.834976523Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-872795_kube-system_107e053e4ed538c93835a81754178211_0" id=f5e91f44-78a7-4db3-9b84-4a32b77b50a2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.835394951Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-872795_kube-system_d62260a65d00c99f27090ce6484101a9_0" id=db25fa85-a0e1-46b4-b318-112b1eff31ab name=/runtime.v1.RuntimeService/CreateContainer
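Editor's note: "cannot open sd-bus: No such file or directory" is the actual failure behind every CreateContainer error in this run: the OCI runtime is configured to drive cgroups through systemd's D-Bus socket and cannot reach it. A hedged way to verify, assuming the stock CRI-O config layout (switching cgroup_manager to "cgroupfs" is one known workaround, not necessarily the fix the maintainers would choose):

	# which cgroup manager is CRI-O configured with?
	grep -rn cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
	# does the systemd private socket the runtime wants actually exist?
	ls -l /run/systemd/private /run/dbus/system_bus_socket 2>/dev/null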
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:45:06.599809    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:06.600291    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:06.601856    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:06.602235    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:06.603730    4059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:45:06 up  1:27,  0 user,  load average: 0.83, 0.47, 0.25
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:45:03 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:03 ha-872795 kubelet[1957]:  > podSandboxID="079790ff593aadbe100b150a42b87bda86092d1fcff86f8774566f658d455d0a"
	Oct 02 20:45:03 ha-872795 kubelet[1957]: E1002 20:45:03.827784    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:45:03 ha-872795 kubelet[1957]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-872795_kube-system(bb93bd54c951d044e2ddbaf0dd48a41c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:03 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:45:03 ha-872795 kubelet[1957]: E1002 20:45:03.827820    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-872795" podUID="bb93bd54c951d044e2ddbaf0dd48a41c"
	Oct 02 20:45:05 ha-872795 kubelet[1957]: E1002 20:45:05.802115    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:45:05 ha-872795 kubelet[1957]: E1002 20:45:05.802247    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:45:05 ha-872795 kubelet[1957]: E1002 20:45:05.835282    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:45:05 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:05 ha-872795 kubelet[1957]:  > podSandboxID="ef6e539a74c14f8dbdf14c5253393747982dcd66fa486356dbfe446d81891de8"
	Oct 02 20:45:05 ha-872795 kubelet[1957]: E1002 20:45:05.835403    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:45:05 ha-872795 kubelet[1957]:         container kube-apiserver start failed in pod kube-apiserver-ha-872795_kube-system(107e053e4ed538c93835a81754178211): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:05 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:45:05 ha-872795 kubelet[1957]: E1002 20:45:05.835446    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-872795" podUID="107e053e4ed538c93835a81754178211"
	Oct 02 20:45:05 ha-872795 kubelet[1957]: E1002 20:45:05.835668    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:45:05 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:05 ha-872795 kubelet[1957]:  > podSandboxID="a3772fd373a48cf1fd0c1e339ac227ef43a36813ee71e966f4b33f85ee2aa32d"
	Oct 02 20:45:05 ha-872795 kubelet[1957]: E1002 20:45:05.835775    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:45:05 ha-872795 kubelet[1957]:         container etcd start failed in pod etcd-ha-872795_kube-system(d62260a65d00c99f27090ce6484101a9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:05 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:45:05 ha-872795 kubelet[1957]: E1002 20:45:05.836945    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-872795" podUID="d62260a65d00c99f27090ce6484101a9"
	Oct 02 20:45:06 ha-872795 kubelet[1957]: E1002 20:45:06.439939    1957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:45:06 ha-872795 kubelet[1957]: I1002 20:45:06.607206    1957 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:45:06 ha-872795 kubelet[1957]: E1002 20:45:06.607552    1957 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	

-- /stdout --
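
Note: the repeated kubelet failure "container create failed: cannot open sd-bus: No such file or directory" is what the OCI runtime typically reports when it is configured for the systemd cgroup manager but cannot reach systemd's D-Bus socket inside the node, which is consistent with all three static pods (kube-controller-manager, kube-apiserver, etcd) failing identically. A minimal way to confirm the CRI-O cgroup setting on the node, sketched under the assumption of CRI-O's default config layout (the drop-in file name below is hypothetical):

	out/minikube-linux-amd64 -p ha-872795 ssh -- sudo grep -r cgroup_manager /etc/crio/
	# a hypothetical drop-in at /etc/crio/crio.conf.d/02-cgroup.conf switching to the
	# cgroupfs manager (which does not need sd-bus) would read:
	#   [crio.runtime]
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
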
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 6 (285.92303ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:45:06.962228   75711 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (1.56s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-872795" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-872795\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-872795\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-872795\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
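
The assertion above parses the JSON emitted by 'profile list': after stopping a control-plane node the profile is expected to report "Degraded", but it still reports "Starting" because the initial cluster start never recorded a successful finish (see the missing END TIME for the start command in the Audit table below). Assuming jq is available on the host, the same check can be reproduced by hand:

	out/minikube-linux-amd64 profile list --output json | jq -r '.valid[] | [.Name, .Status] | @tsv'
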
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 66301,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:35:02.679281779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f7265370a58453c840a8192d5da4a6feb4bbbb948c2dbabc9298cbd8189ca6f",
	            "SandboxKey": "/var/run/docker/netns/6f7265370a58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:33:39:9e:e0:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "632391284aa44c2eb0bec81b03555e3fed7084145763827002664c08b386a588",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
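
In the inspect output above, every container port is published on an ephemeral host port bound to 127.0.0.1 (8443/tcp on 32786, for example). The same Go template that the tooling uses later in these logs can pull a single mapping out directly:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-872795
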
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 6 (276.416609ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:45:07.559615   75958 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
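
The root cause surfaced in the stderr block is that the "ha-872795" context is missing from the kubeconfig the test points at. Assuming kubectl is on the PATH, the file's actual contents can be confirmed with:

	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21683-9327/kubeconfig
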
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-753218 image build -t localhost/my-image:functional-753218 testdata/build --alsologtostderr          │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format json --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format table --alsologtostderr                                                     │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls                                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ delete  │ -p functional-753218                                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ start   │ ha-872795 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- rollout status deployment/busybox                                                          │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node add --alsologtostderr -v 5                                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node stop m02 --alsologtostderr -v 5                                                                  │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:34:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:34:57.603077   65735 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:34:57.603305   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603314   65735 out.go:374] Setting ErrFile to fd 2...
	I1002 20:34:57.603317   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603506   65735 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:34:57.604047   65735 out.go:368] Setting JSON to false
	I1002 20:34:57.604900   65735 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4647,"bootTime":1759432651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:34:57.604978   65735 start.go:140] virtualization: kvm guest
	I1002 20:34:57.606757   65735 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:34:57.608100   65735 notify.go:221] Checking for updates...
	I1002 20:34:57.608137   65735 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:34:57.609378   65735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:34:57.610568   65735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:34:57.611851   65735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:34:57.613234   65735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:34:57.614447   65735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:34:57.615922   65735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:34:57.640606   65735 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:34:57.640699   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.694592   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.684173929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.694720   65735 docker.go:319] overlay module found
	I1002 20:34:57.696786   65735 out.go:179] * Using the docker driver based on user configuration
	I1002 20:34:57.698015   65735 start.go:306] selected driver: docker
	I1002 20:34:57.698032   65735 start.go:936] validating driver "docker" against <nil>
	I1002 20:34:57.698041   65735 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:34:57.698548   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.751429   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.741121046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.751598   65735 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:34:57.751837   65735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:34:57.753668   65735 out.go:179] * Using Docker driver with root privileges
	I1002 20:34:57.755151   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:34:57.755225   65735 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 20:34:57.755237   65735 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:34:57.755322   65735 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1002 20:34:57.756619   65735 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:34:57.757986   65735 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:34:57.759335   65735 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:34:57.760371   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:57.760415   65735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:34:57.760432   65735 cache.go:59] Caching tarball of preloaded images
	I1002 20:34:57.760405   65735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:34:57.760507   65735 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:34:57.760515   65735 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:34:57.760879   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:34:57.760904   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json: {Name:mk498e6817e1668b9a52683e21d0fee45dc5b922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:34:57.779767   65735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:34:57.779785   65735 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:34:57.779798   65735 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:34:57.779817   65735 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:34:57.779902   65735 start.go:365] duration metric: took 70.896µs to acquireMachinesLock for "ha-872795"
	I1002 20:34:57.779922   65735 start.go:94] Provisioning new machine with config: &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:34:57.779985   65735 start.go:126] createHost starting for "" (driver="docker")
	I1002 20:34:57.782551   65735 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 20:34:57.782770   65735 start.go:160] libmachine.API.Create for "ha-872795" (driver="docker")
	I1002 20:34:57.782797   65735 client.go:168] LocalClient.Create starting
	I1002 20:34:57.782848   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem
	I1002 20:34:57.782884   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782899   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.782957   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem
	I1002 20:34:57.782975   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782985   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.783261   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:34:57.799786   65735 cli_runner.go:211] docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:34:57.799838   65735 network_create.go:284] running [docker network inspect ha-872795] to gather additional debugging logs...
	I1002 20:34:57.799850   65735 cli_runner.go:164] Run: docker network inspect ha-872795
	W1002 20:34:57.816003   65735 cli_runner.go:211] docker network inspect ha-872795 returned with exit code 1
	I1002 20:34:57.816028   65735 network_create.go:287] error running [docker network inspect ha-872795]: docker network inspect ha-872795: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-872795 not found
	I1002 20:34:57.816042   65735 network_create.go:289] output of [docker network inspect ha-872795]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-872795 not found
	
	** /stderr **
	I1002 20:34:57.816123   65735 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:34:57.832837   65735 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d0e700}
	I1002 20:34:57.832869   65735 network_create.go:124] attempt to create docker network ha-872795 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:34:57.832917   65735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-872795 ha-872795
	I1002 20:34:57.888359   65735 network_create.go:108] docker network ha-872795 192.168.49.0/24 created
	I1002 20:34:57.888401   65735 kic.go:121] calculated static IP "192.168.49.2" for the "ha-872795" container
	I1002 20:34:57.888473   65735 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:34:57.905091   65735 cli_runner.go:164] Run: docker volume create ha-872795 --label name.minikube.sigs.k8s.io=ha-872795 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:34:57.922367   65735 oci.go:103] Successfully created a docker volume ha-872795
	I1002 20:34:57.922439   65735 cli_runner.go:164] Run: docker run --rm --name ha-872795-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --entrypoint /usr/bin/test -v ha-872795:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:34:58.293551   65735 oci.go:107] Successfully prepared a docker volume ha-872795
	I1002 20:34:58.293606   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:58.293630   65735 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:34:58.293727   65735 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:35:02.570537   65735 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.276723115s)
	I1002 20:35:02.570574   65735 kic.go:203] duration metric: took 4.276941565s to extract preloaded images to volume ...
	W1002 20:35:02.570728   65735 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 20:35:02.570763   65735 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 20:35:02.570812   65735 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:35:02.622879   65735 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-872795 --name ha-872795 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-872795 --network ha-872795 --ip 192.168.49.2 --volume ha-872795:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:35:02.884170   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Running}}
	I1002 20:35:02.901900   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:02.919319   65735 cli_runner.go:164] Run: docker exec ha-872795 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:35:02.965612   65735 oci.go:144] the created container "ha-872795" has a running status.
	I1002 20:35:02.965668   65735 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa...
	I1002 20:35:03.839142   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 20:35:03.839192   65735 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:35:03.863522   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.880799   65735 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:35:03.880817   65735 kic_runner.go:114] Args: [docker exec --privileged ha-872795 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:35:03.926950   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.943420   65735 machine.go:93] provisionDockerMachine start ...
	I1002 20:35:03.943502   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:03.960150   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:03.960365   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:03.960377   65735 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:35:04.102560   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.102595   65735 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:35:04.102672   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.119770   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.119958   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.119969   65735 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:35:04.269954   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.270024   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.289297   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.289532   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.289560   65735 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:35:04.431294   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:35:04.431331   65735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:35:04.431373   65735 ubuntu.go:190] setting up certificates
	I1002 20:35:04.431385   65735 provision.go:84] configureAuth start
	I1002 20:35:04.431438   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:04.448348   65735 provision.go:143] copyHostCerts
	I1002 20:35:04.448379   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448406   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:35:04.448415   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448488   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:35:04.448574   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448595   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:35:04.448601   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448642   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:35:04.448726   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448747   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:35:04.448752   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448791   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:35:04.448862   65735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:35:04.988583   65735 provision.go:177] copyRemoteCerts
	I1002 20:35:04.988660   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:35:04.988705   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.006318   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.107594   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:35:05.107664   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:35:05.126297   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:35:05.126364   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 20:35:05.143480   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:35:05.143534   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:35:05.159876   65735 provision.go:87] duration metric: took 728.473665ms to configureAuth
	I1002 20:35:05.159898   65735 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:35:05.160046   65735 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:35:05.160126   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.177170   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:05.177376   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:05.177394   65735 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:35:05.428320   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:35:05.428345   65735 machine.go:96] duration metric: took 1.484903501s to provisionDockerMachine
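(The SSH command above drops an environment file that the CRI-O unit in the kicbase image is assumed to read, then restarts the service. A hypothetical follow-up sketch to confirm the insecure-registry option took effect; the systemctl expectation is an assumption, not captured output:)

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -i minikube   # unit should reference the sysconfig file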
	I1002 20:35:05.428356   65735 client.go:171] duration metric: took 7.645553848s to LocalClient.Create
	I1002 20:35:05.428375   65735 start.go:168] duration metric: took 7.645605965s to libmachine.API.Create "ha-872795"
	I1002 20:35:05.428384   65735 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:35:05.428397   65735 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:35:05.428457   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:35:05.428518   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.445519   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.548684   65735 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:35:05.552289   65735 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:35:05.552315   65735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:35:05.552328   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:35:05.552389   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:35:05.552506   65735 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:35:05.552523   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:35:05.552690   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:35:05.559922   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:05.579393   65735 start.go:297] duration metric: took 150.9926ms for postStartSetup
	I1002 20:35:05.579737   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.596296   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:35:05.596542   65735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:35:05.596580   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.613097   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.710446   65735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:35:05.714631   65735 start.go:129] duration metric: took 7.934634425s to createHost
	I1002 20:35:05.714669   65735 start.go:84] releasing machines lock for "ha-872795", held for 7.934754949s
	I1002 20:35:05.714730   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.731548   65735 ssh_runner.go:195] Run: cat /version.json
	I1002 20:35:05.731603   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.731636   65735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:35:05.731714   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.749775   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.750794   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.899878   65735 ssh_runner.go:195] Run: systemctl --version
	I1002 20:35:05.906012   65735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:35:05.939030   65735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:35:05.945162   65735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:35:05.945228   65735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:35:05.970822   65735 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 20:35:05.970843   65735 start.go:496] detecting cgroup driver to use...
	I1002 20:35:05.970881   65735 detect.go:190] detected "systemd" cgroup driver on host os
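(A sketch of one way to reproduce the "systemd" cgroup-driver detection by hand; these checks are illustrative assumptions, not commands from the run:)

    stat -fc %T /sys/fs/cgroup   # "cgroup2fs" means the unified (v2) hierarchy, where systemd is the usual driver
    ps -p 1 -o comm=             # "systemd" confirms systemd is PID 1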
	I1002 20:35:05.970920   65735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:35:05.986282   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:35:05.998266   65735 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:35:05.998312   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:35:06.014196   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:35:06.031061   65735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:35:06.110898   65735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:35:06.195426   65735 docker.go:234] disabling docker service ...
	I1002 20:35:06.195481   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:35:06.213240   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:35:06.225772   65735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:35:06.305091   65735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:35:06.379591   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:35:06.391746   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:35:06.405291   65735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:35:06.405351   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.415561   65735 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:35:06.415640   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.425296   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.433995   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.442516   65735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:35:06.450342   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.458541   65735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.471597   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.479764   65735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:35:06.486745   65735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
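(After the sed edits above, the CRI-O drop-in should carry the pause image, the systemd cgroup settings, and the unprivileged-port sysctl. A verification sketch; the expected values are inferred from the commands, not captured output:)

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",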
	I1002 20:35:06.493925   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:06.570695   65735 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:35:06.674347   65735 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:35:06.674398   65735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:35:06.678210   65735 start.go:564] Will wait 60s for crictl version
	I1002 20:35:06.678265   65735 ssh_runner.go:195] Run: which crictl
	I1002 20:35:06.681670   65735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:35:06.705846   65735 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:35:06.705915   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.732749   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.761896   65735 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:35:06.763235   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:35:06.779789   65735 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:35:06.783762   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
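(The one-liner above is an idempotent hosts-file update: strip any existing host.minikube.internal line, append the fresh mapping to a temp file, then copy the whole file back over /etc/hosts, so re-runs never accumulate duplicates. The same pattern, parameterized; NAME and IP are placeholders, not values beyond those already logged:)

    NAME=host.minikube.internal; IP=192.168.49.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$ \
      && sudo cp /tmp/h.$$ /etc/hosts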
	I1002 20:35:06.793731   65735 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:35:06.793825   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:35:06.793893   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.823605   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.823628   65735 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:35:06.823701   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.848195   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.848217   65735 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:35:06.848224   65735 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:35:06.848297   65735 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:35:06.848363   65735 ssh_runner.go:195] Run: crio config
	I1002 20:35:06.893288   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:35:06.893308   65735 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:35:06.893324   65735 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:35:06.893344   65735 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:35:06.893447   65735 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
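(One way to sanity-check a rendered config like the one above before init; a hypothetical step the test does not perform, assuming the kubeadm binary path logged later in this run. kubeadm config validate exists since v1.26:)

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml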
	
	I1002 20:35:06.893469   65735 kube-vip.go:115] generating kube-vip config ...
	I1002 20:35:06.893510   65735 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 20:35:06.905349   65735 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
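(kube-vip's control-plane load-balancing is skipped because lsmod found no ip_vs modules; the VIP itself still works via ARP (vip_arp=true in the config below). A hypothetical remediation sketch, if load-balancing were wanted, after which the lsmod check minikube ran above should succeed:)

    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
    lsmod | grep ip_vs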
	I1002 20:35:06.905448   65735 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
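(The manifest above runs kube-vip as a static pod that claims the API VIP 192.168.49.254 on eth0 via ARP, with leader election through the plndr-cp-lock lease. A sketch, hypothetical and from a node shell, for confirming the VIP once a leader is elected; anonymous access to /livez is the Kubernetes default but still an assumption here:)

    ip addr show dev eth0 | grep 192.168.49.254   # the elected leader holds the address
    curl -ks https://192.168.49.254:8443/livez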
	I1002 20:35:06.905495   65735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:35:06.913067   65735 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:35:06.913130   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 20:35:06.920364   65735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:35:06.932249   65735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:35:06.947038   65735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:35:06.959316   65735 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 20:35:06.973553   65735 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 20:35:06.977144   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.987569   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:07.065543   65735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:35:07.088051   65735 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:35:07.088073   65735 certs.go:195] generating shared ca certs ...
	I1002 20:35:07.088093   65735 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.088253   65735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:35:07.088318   65735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:35:07.088333   65735 certs.go:257] generating profile certs ...
	I1002 20:35:07.088394   65735 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:35:07.088419   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt with IP's: []
	I1002 20:35:07.271177   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt ...
	I1002 20:35:07.271216   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt: {Name:mkc9958351da3092e35cf2a0dff49f20b7dda9b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271433   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key ...
	I1002 20:35:07.271450   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key: {Name:mk9049d9b976553e3e7ceaf1eae8281114f2b93c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271566   65735 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2
	I1002 20:35:07.271586   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 20:35:07.440257   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 ...
	I1002 20:35:07.440291   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2: {Name:mk26eb0eaef560c0eb32f96e5b3e81634dfe2a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440486   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 ...
	I1002 20:35:07.440505   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2: {Name:mkc88fc8c0ccf5f6abaf6c3777b5135914058a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440621   65735 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt
	I1002 20:35:07.440775   65735 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key
	I1002 20:35:07.440868   65735 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:35:07.440913   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt with IP's: []
	I1002 20:35:07.579277   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt ...
	I1002 20:35:07.579309   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt: {Name:mk15b2fa98abbfdb48c91ec680acf9fb3d36790e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579497   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key ...
	I1002 20:35:07.579512   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key: {Name:mk9a0c724bfce05599dc583fc00cefa3d87ea4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579634   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:35:07.579676   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:35:07.579703   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:35:07.579724   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:35:07.579741   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:35:07.579761   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:35:07.579777   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:35:07.579793   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:35:07.579863   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:35:07.579919   65735 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:35:07.579934   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:35:07.579978   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:35:07.580017   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:35:07.580054   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:35:07.580109   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:07.580147   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.580168   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.580188   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.580777   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:35:07.598837   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:35:07.615911   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:35:07.632508   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:35:07.649448   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:35:07.666220   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:35:07.682564   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:35:07.698671   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:35:07.715367   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:35:07.733828   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:35:07.750440   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:35:07.767319   65735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:35:07.779274   65735 ssh_runner.go:195] Run: openssl version
	I1002 20:35:07.785412   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:35:07.793537   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797120   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797185   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.830751   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:35:07.839226   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:35:07.847497   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851214   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851284   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.884464   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:35:07.892882   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:35:07.901283   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904919   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904989   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.938825   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
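(The 8-hex-digit link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is what the openssl x509 -hash calls feed into the ln -fs steps. Reproducing one by hand, as a sketch:)

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941   (matches the /etc/ssl/certs/b5213941.0 link created above)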
	I1002 20:35:07.947236   65735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:35:07.950736   65735 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:35:07.950798   65735 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:35:07.950893   65735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:35:07.950946   65735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:35:07.977912   65735 cri.go:89] found id: ""
	I1002 20:35:07.977979   65735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:35:07.986292   65735 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:35:07.994921   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:35:07.994975   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:35:08.002478   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:35:08.002496   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:35:08.002543   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:35:08.009971   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:35:08.010024   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:35:08.017237   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:35:08.024502   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:35:08.024556   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:35:08.031907   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.039330   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:35:08.039379   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.046463   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:35:08.053807   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:35:08.053903   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:35:08.060624   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:35:08.119106   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:35:08.173560   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:39:12.973904   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:39:12.973990   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:39:12.976774   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:12.976818   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:12.976887   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:12.976934   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:12.976995   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:12.977058   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:12.977098   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:12.977140   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:12.977186   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:12.977225   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:12.977267   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:12.977305   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:12.977341   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:12.977406   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:12.977543   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:12.977704   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:12.977780   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:12.979758   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:12.979842   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:12.979923   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:12.980019   65735 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:39:12.980106   65735 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:39:12.980194   65735 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:39:12.980269   65735 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:39:12.980321   65735 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:39:12.980413   65735 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980466   65735 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:39:12.980568   65735 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980632   65735 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:39:12.980716   65735 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:39:12.980755   65735 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:39:12.980811   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:12.980857   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:12.980906   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:12.980951   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:12.981048   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:12.981126   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:12.981273   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:12.981350   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:12.983755   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:12.983842   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:12.983938   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:12.984036   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:12.984176   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:12.984257   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:12.984363   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:12.984488   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:12.984545   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:12.984753   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:12.984854   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:12.984903   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.922515ms
	I1002 20:39:12.984979   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:12.985054   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:12.985161   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:12.985238   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:39:12.985311   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	I1002 20:39:12.985378   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	I1002 20:39:12.985440   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	I1002 20:39:12.985446   65735 kubeadm.go:318] 
	I1002 20:39:12.985522   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:39:12.985593   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:39:12.985693   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:39:12.985824   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:39:12.985912   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:39:12.986021   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:39:12.986035   65735 kubeadm.go:318] 
	W1002 20:39:12.986169   65735 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.922515ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
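(All three control-plane health endpoints stayed down for the full 4m0s window, so the static pods likely never started or crashed immediately. Beyond the crictl commands kubeadm suggests above, a hypothetical next diagnostic step, not part of this run, would be the kubelet journal and container list on the node:)

    sudo journalctl -u kubelet --no-pager -n 50
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep -E 'kube|etcd'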
	
	I1002 20:39:12.986234   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:39:15.725293   65735 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.739035287s)
	I1002 20:39:15.725355   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:39:15.737845   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:39:15.737903   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:39:15.745492   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:39:15.745511   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:39:15.745550   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:39:15.752939   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:39:15.752992   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:39:15.759738   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:39:15.767128   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:39:15.767188   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:39:15.774064   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.781262   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:39:15.781310   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.788069   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:39:15.794962   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:39:15.795005   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:39:15.801555   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:39:15.838178   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:15.838265   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:15.857468   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:15.857554   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:15.857595   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:15.857662   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:15.857732   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:15.857790   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:15.857850   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:15.857910   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:15.857968   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:15.858025   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:15.858074   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:15.913423   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:15.913519   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:15.913695   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:15.919381   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:15.923412   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:15.923499   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:15.923576   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:15.923699   65735 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:39:15.923782   65735 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:39:15.923894   65735 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:39:15.923992   65735 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:39:15.924083   65735 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:39:15.924181   65735 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:39:15.924290   65735 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:39:15.924413   65735 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:39:15.924467   65735 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:39:15.924516   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:15.972082   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:16.453776   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:16.604369   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:16.663878   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:16.897315   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:16.897840   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:16.900020   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:16.904460   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:16.904580   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:16.904708   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:16.904777   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:16.917381   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:16.917507   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:16.923733   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:16.923961   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:16.924014   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:17.026795   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:17.026994   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:18.027761   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001153954s
	I1002 20:39:18.030723   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:18.030874   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:18.030983   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:18.031132   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:43:18.032268   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	I1002 20:43:18.032470   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	I1002 20:43:18.032708   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	I1002 20:43:18.032742   65735 kubeadm.go:318] 
	I1002 20:43:18.032978   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:43:18.033184   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtime's CLI.
	I1002 20:43:18.033380   65735 kubeadm.go:318] Here is one example of how you may list all running Kubernetes containers by using crictl:
	I1002 20:43:18.033599   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:43:18.033817   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:43:18.034037   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:43:18.034052   65735 kubeadm.go:318] 
	I1002 20:43:18.036299   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:43:18.036439   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:43:18.037026   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 20:43:18.037095   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:43:18.037166   65735 kubeadm.go:402] duration metric: took 8m10.08637402s to StartCluster
	I1002 20:43:18.037208   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:43:18.037266   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:43:18.063637   65735 cri.go:89] found id: ""
	I1002 20:43:18.063696   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.063705   65735 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:43:18.063713   65735 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:43:18.063764   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:43:18.087963   65735 cri.go:89] found id: ""
	I1002 20:43:18.087984   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.087991   65735 logs.go:284] No container was found matching "etcd"
	I1002 20:43:18.087996   65735 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:43:18.088039   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:43:18.111877   65735 cri.go:89] found id: ""
	I1002 20:43:18.111898   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.111905   65735 logs.go:284] No container was found matching "coredns"
	I1002 20:43:18.111911   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:43:18.111958   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:43:18.134814   65735 cri.go:89] found id: ""
	I1002 20:43:18.134834   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.134841   65735 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:43:18.134847   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:43:18.134887   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:43:18.158528   65735 cri.go:89] found id: ""
	I1002 20:43:18.158551   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.158558   65735 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:43:18.158564   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:43:18.158607   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:43:18.182849   65735 cri.go:89] found id: ""
	I1002 20:43:18.182871   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.182878   65735 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:43:18.182883   65735 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:43:18.182926   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:43:18.207447   65735 cri.go:89] found id: ""
	I1002 20:43:18.207470   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.207481   65735 logs.go:284] No container was found matching "kindnet"
	I1002 20:43:18.207493   65735 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:43:18.207507   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:43:18.263385   65735 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:43:18.263409   65735 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:43:18.263421   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:43:18.324936   65735 logs.go:123] Gathering logs for container status ...
	I1002 20:43:18.324969   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:43:18.351509   65735 logs.go:123] Gathering logs for kubelet ...
	I1002 20:43:18.351536   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:43:18.417761   65735 logs.go:123] Gathering logs for dmesg ...
	I1002 20:43:18.417794   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1002 20:43:18.428567   65735 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:43:18.428641   65735 out.go:285] * 
	W1002 20:43:18.428716   65735 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.428735   65735 out.go:285] * 
	W1002 20:43:18.430415   65735 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:43:18.433894   65735 out.go:203] 
	W1002 20:43:18.435307   65735 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.435342   65735 out.go:285] * 
	I1002 20:43:18.437263   65735 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:45:03 ha-872795 crio[775]: time="2025-10-02T20:45:03.825091575Z" level=info msg="createCtr: removing container 6c3a4ea489abed9b552b579e7db3d40efdb4c204cf679ef9ef040a14cf57dce9" id=885a043f-6e0d-4a9f-b305-2e99d4a90b3a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:03 ha-872795 crio[775]: time="2025-10-02T20:45:03.825127683Z" level=info msg="createCtr: deleting container 6c3a4ea489abed9b552b579e7db3d40efdb4c204cf679ef9ef040a14cf57dce9 from storage" id=885a043f-6e0d-4a9f-b305-2e99d4a90b3a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:03 ha-872795 crio[775]: time="2025-10-02T20:45:03.827376242Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=885a043f-6e0d-4a9f-b305-2e99d4a90b3a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.802534912Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=618e962e-dd71-4480-9eb0-cca3236a192c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.802678466Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=1ec449fd-f9d8-4fdf-808c-0394de277a28 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.803436534Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=eb11d595-5487-48ae-bc2b-bad9d7754c27 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.803509056Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=40a87680-0bd6-4405-bac9-4206c84a827d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.804418555Z" level=info msg="Creating container: kube-system/etcd-ha-872795/etcd" id=db25fa85-a0e1-46b4-b318-112b1eff31ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.804473062Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-872795/kube-apiserver" id=f5e91f44-78a7-4db3-9b84-4a32b77b50a2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.804630538Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.804726597Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.81004713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.810558939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.811450856Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.812003578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.828858261Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f5e91f44-78a7-4db3-9b84-4a32b77b50a2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.83026621Z" level=info msg="createCtr: deleting container ID fecd6ab9a7d63c0fa93c4b37ffec768908f24b3263ae1b7588327ae212faacaa from idIndex" id=f5e91f44-78a7-4db3-9b84-4a32b77b50a2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.830302069Z" level=info msg="createCtr: removing container fecd6ab9a7d63c0fa93c4b37ffec768908f24b3263ae1b7588327ae212faacaa" id=f5e91f44-78a7-4db3-9b84-4a32b77b50a2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.83033202Z" level=info msg="createCtr: deleting container fecd6ab9a7d63c0fa93c4b37ffec768908f24b3263ae1b7588327ae212faacaa from storage" id=f5e91f44-78a7-4db3-9b84-4a32b77b50a2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.830511228Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=db25fa85-a0e1-46b4-b318-112b1eff31ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.831991725Z" level=info msg="createCtr: deleting container ID e37fdcff3c307af69c33b435964236091cfe37a0b152856ca38b47dba5025bc3 from idIndex" id=db25fa85-a0e1-46b4-b318-112b1eff31ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.83206365Z" level=info msg="createCtr: removing container e37fdcff3c307af69c33b435964236091cfe37a0b152856ca38b47dba5025bc3" id=db25fa85-a0e1-46b4-b318-112b1eff31ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.832106279Z" level=info msg="createCtr: deleting container e37fdcff3c307af69c33b435964236091cfe37a0b152856ca38b47dba5025bc3 from storage" id=db25fa85-a0e1-46b4-b318-112b1eff31ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.834976523Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-872795_kube-system_107e053e4ed538c93835a81754178211_0" id=f5e91f44-78a7-4db3-9b84-4a32b77b50a2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:05 ha-872795 crio[775]: time="2025-10-02T20:45:05.835394951Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-872795_kube-system_d62260a65d00c99f27090ce6484101a9_0" id=db25fa85-a0e1-46b4-b318-112b1eff31ab name=/runtime.v1.RuntimeService/CreateContainer
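Every CreateContainer call in this CRI-O section fails with "cannot open sd-bus: No such file or directory", which typically means the OCI runtime was asked to create a systemd-managed cgroup but cannot reach the systemd D-Bus socket inside the node. A hedged way to check which cgroup manager CRI-O is configured with (the cgroup_manager key is standard CRI-O configuration; the exact file location on the minikube node is an assumption):

    # Look for the cgroup_manager setting in CRI-O's configuration (sketch).
    docker exec ha-872795 grep -rn cgroup_manager /etc/crio/
    # cgroup_manager = "systemd" requires a reachable systemd bus inside the
    # container; "cgroupfs" avoids that dependency.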
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:45:08.111839    4231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:08.112330    4231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:08.113949    4231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:08.114349    4231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:08.115842    4231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
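These kubectl failures are the expected downstream symptom: with no kube-apiserver container running (see the empty container status table above), nothing listens on port 8443. One way to confirm, again assuming the docker driver and that ss from iproute2 is present on the node:

    # Show TCP listeners on 8443; empty output matches the refused connections above.
    docker exec ha-872795 ss -ltn 'sport = :8443'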
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:45:08 up  1:27,  0 user,  load average: 0.84, 0.48, 0.25
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:45:03 ha-872795 kubelet[1957]: E1002 20:45:03.827784    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:45:03 ha-872795 kubelet[1957]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-872795_kube-system(bb93bd54c951d044e2ddbaf0dd48a41c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:03 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:45:03 ha-872795 kubelet[1957]: E1002 20:45:03.827820    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-872795" podUID="bb93bd54c951d044e2ddbaf0dd48a41c"
	Oct 02 20:45:05 ha-872795 kubelet[1957]: E1002 20:45:05.802115    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:45:05 ha-872795 kubelet[1957]: E1002 20:45:05.802247    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:45:05 ha-872795 kubelet[1957]: E1002 20:45:05.835282    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:45:05 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:05 ha-872795 kubelet[1957]:  > podSandboxID="ef6e539a74c14f8dbdf14c5253393747982dcd66fa486356dbfe446d81891de8"
	Oct 02 20:45:05 ha-872795 kubelet[1957]: E1002 20:45:05.835403    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:45:05 ha-872795 kubelet[1957]:         container kube-apiserver start failed in pod kube-apiserver-ha-872795_kube-system(107e053e4ed538c93835a81754178211): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:05 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:45:05 ha-872795 kubelet[1957]: E1002 20:45:05.835446    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-872795" podUID="107e053e4ed538c93835a81754178211"
	Oct 02 20:45:05 ha-872795 kubelet[1957]: E1002 20:45:05.835668    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:45:05 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:05 ha-872795 kubelet[1957]:  > podSandboxID="a3772fd373a48cf1fd0c1e339ac227ef43a36813ee71e966f4b33f85ee2aa32d"
	Oct 02 20:45:05 ha-872795 kubelet[1957]: E1002 20:45:05.835775    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:45:05 ha-872795 kubelet[1957]:         container etcd start failed in pod etcd-ha-872795_kube-system(d62260a65d00c99f27090ce6484101a9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:05 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:45:05 ha-872795 kubelet[1957]: E1002 20:45:05.836945    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-872795" podUID="d62260a65d00c99f27090ce6484101a9"
	Oct 02 20:45:06 ha-872795 kubelet[1957]: E1002 20:45:06.439939    1957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:45:06 ha-872795 kubelet[1957]: I1002 20:45:06.607206    1957 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:45:06 ha-872795 kubelet[1957]: E1002 20:45:06.607552    1957 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:45:06 ha-872795 kubelet[1957]: E1002 20:45:06.710513    1957 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-872795.186ac7230cb11fa8  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-872795,UID:ha-872795,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-872795 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-872795,},FirstTimestamp:2025-10-02 20:39:17.792317352 +0000 UTC m=+0.765279708,LastTimestamp:2025-10-02 20:39:17.792317352 +0000 UTC m=+0.765279708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-872795,}"
	Oct 02 20:45:07 ha-872795 kubelet[1957]: E1002 20:45:07.818120    1957 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-872795\" not found"
	

-- /stdout --
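The kubelet entries at the end of the dump are all downstream of the same runtime failure. To gauge how often the sd-bus error recurred, one could filter the kubelet journal directly, a sketch built from the same journalctl and grep tools used elsewhere in this log:

    # Count container-create failures caused by the missing sd-bus socket.
    docker exec ha-872795 sh -c "journalctl -u kubelet --no-pager | grep -c 'cannot open sd-bus'"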
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 6 (279.434371ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:45:08.467953   76281 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.51s)
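The status command exits 6 because an "ha-872795" entry was never written to the kubeconfig: kubeadm never produced a reachable apiserver, so there was no endpoint to record. If the endpoint were merely stale rather than absent, the warning's own suggestion would apply:

    # Only useful once the cluster is actually running (per the warning above).
    minikube update-context -p ha-872795
    kubectl config get-contexts    # check whether an ha-872795 context now exists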

TestMultiControlPlane/serial/RestartSecondaryNode (48.01s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 node start m02 --alsologtostderr -v 5: exit status 85 (54.683635ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 20:45:08.522107   76392 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:08.522407   76392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:08.522418   76392 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:08.522422   76392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:08.522617   76392 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:45:08.522894   76392 mustload.go:65] Loading cluster: ha-872795
	I1002 20:45:08.523212   76392 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:08.525226   76392 out.go:203] 
	W1002 20:45:08.526503   76392 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1002 20:45:08.526515   76392 out.go:285] * 
	* 
	W1002 20:45:08.529632   76392 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:45:08.531291   76392 out.go:203] 

** /stderr **
ha_test.go:424: I1002 20:45:08.522107   76392 out.go:360] Setting OutFile to fd 1 ...
I1002 20:45:08.522407   76392 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:45:08.522418   76392 out.go:374] Setting ErrFile to fd 2...
I1002 20:45:08.522422   76392 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:45:08.522617   76392 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
I1002 20:45:08.522894   76392 mustload.go:65] Loading cluster: ha-872795
I1002 20:45:08.523212   76392 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:45:08.525226   76392 out.go:203] 
W1002 20:45:08.526503   76392 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1002 20:45:08.526515   76392 out.go:285] * 
* 
W1002 20:45:08.529632   76392 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1002 20:45:08.531291   76392 out.go:203] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-linux-amd64 -p ha-872795 node start m02 --alsologtostderr -v 5": exit status 85
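
Note: exit status 85 with GUEST_NODE_RETRIEVE means the profile loaded by mustload.go no longer lists an m02 node, so there is nothing to start. A minimal sketch of how such a lookup fails; the type and function names are illustrative, not minikube's actual API:

    // Hypothetical sketch of a node lookup over a profile's node list.
    package main

    import "fmt"

    type Node struct{ Name string }
    type Profile struct{ Nodes []Node }

    // findNode mirrors the failure mode in the log: the requested name
    // is simply absent from the loaded profile.
    func findNode(p Profile, name string) (Node, error) {
        for _, n := range p.Nodes {
            if n.Name == name {
                return n, nil
            }
        }
        return Node{}, fmt.Errorf("retrieving node: Could not find node %s", name)
    }

    func main() {
        p := Profile{Nodes: []Node{{Name: "ha-872795"}}} // only the primary node survived
        if _, err := findNode(p, "m02"); err != nil {
            fmt.Println("X Exiting due to GUEST_NODE_RETRIEVE:", err)
        }
    }
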
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5: exit status 6 (279.407822ms)

-- stdout --
	ha-872795
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 20:45:08.576142   76403 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:08.576406   76403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:08.576416   76403 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:08.576420   76403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:08.576608   76403 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:45:08.576798   76403 out.go:368] Setting JSON to false
	I1002 20:45:08.576822   76403 mustload.go:65] Loading cluster: ha-872795
	I1002 20:45:08.576938   76403 notify.go:221] Checking for updates...
	I1002 20:45:08.577156   76403 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:08.577171   76403 status.go:174] checking status of ha-872795 ...
	I1002 20:45:08.577573   76403 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:08.597446   76403 status.go:371] ha-872795 host status = "Running" (err=<nil>)
	I1002 20:45:08.597468   76403 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:08.597753   76403 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:45:08.615678   76403 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:08.615903   76403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:45:08.615953   76403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:45:08.632860   76403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:45:08.730622   76403 ssh_runner.go:195] Run: systemctl --version
	I1002 20:45:08.736759   76403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:45:08.748384   76403 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:08.801027   76403 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:45:08.790958876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 20:45:08.801409   76403 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:45:08.801436   76403 api_server.go:166] Checking apiserver status ...
	I1002 20:45:08.801469   76403 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 20:45:08.811420   76403 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:45:08.811450   76403 status.go:463] ha-872795 apiserver status = Running (err=<nil>)
	I1002 20:45:08.811460   76403 status.go:176] ha-872795 status: &{Name:ha-872795 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1002 20:45:08.815713   12851 retry.go:31] will retry after 749.450196ms: exit status 6
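
Note: the retry.go:31 lines in this test show the harness re-running `minikube status` with growing, jittered waits (0.75s, 1.56s, 3.14s, ... up to 11.86s). A minimal sketch of one way to produce such a backoff schedule; the helper is illustrative, not the harness's real retry API:

    // Sketch of retry-with-backoff: double the base wait each attempt
    // and add random jitter, stopping on the first success.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            wait := base + time.Duration(rand.Int63n(int64(base)/2)) // jitter
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            base *= 2
        }
        return err
    }

    func main() {
        _ = retry(5, 750*time.Millisecond, func() error {
            return fmt.Errorf("exit status 6") // stand-in for the status check
        })
    }
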
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5: exit status 6 (274.317145ms)

-- stdout --
	ha-872795
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 20:45:09.605555   76513 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:09.605811   76513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:09.605820   76513 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:09.605824   76513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:09.606018   76513 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:45:09.606186   76513 out.go:368] Setting JSON to false
	I1002 20:45:09.606207   76513 mustload.go:65] Loading cluster: ha-872795
	I1002 20:45:09.606281   76513 notify.go:221] Checking for updates...
	I1002 20:45:09.606551   76513 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:09.606564   76513 status.go:174] checking status of ha-872795 ...
	I1002 20:45:09.606960   76513 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:09.625318   76513 status.go:371] ha-872795 host status = "Running" (err=<nil>)
	I1002 20:45:09.625373   76513 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:09.625638   76513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:45:09.642378   76513 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:09.642692   76513 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:45:09.642746   76513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:45:09.658969   76513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:45:09.756470   76513 ssh_runner.go:195] Run: systemctl --version
	I1002 20:45:09.762516   76513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:45:09.774168   76513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:09.826267   76513 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:45:09.816432779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 20:45:09.826691   76513 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:45:09.826718   76513 api_server.go:166] Checking apiserver status ...
	I1002 20:45:09.826759   76513 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 20:45:09.836365   76513 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:45:09.836409   76513 status.go:463] ha-872795 apiserver status = Running (err=<nil>)
	I1002 20:45:09.836422   76513 status.go:176] ha-872795 status: &{Name:ha-872795 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1002 20:45:09.840197   12851 retry.go:31] will retry after 1.558315512s: exit status 6
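
Note: each status attempt resolves the container's forwarded SSH port with the `docker container inspect -f` template quoted in the log (sshutil.go then dials 127.0.0.1 on that port). A sketch of that extraction; the template is verbatim from the log, the Go wrapper around it is illustrative:

    // Sketch: ask dockerd for the host port mapped to the container's 22/tcp.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func sshHostPort(container string) (string, error) {
        tmpl := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        // The template wraps the value in single quotes; strip them and the newline.
        return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
    }

    func main() {
        port, err := sshHostPort("ha-872795")
        fmt.Println(port, err) // 32783 on this run
    }
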
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5: exit status 6 (278.903001ms)

-- stdout --
	ha-872795
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 20:45:11.439292   76641 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:11.439576   76641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:11.439583   76641 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:11.439590   76641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:11.439987   76641 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:45:11.440232   76641 out.go:368] Setting JSON to false
	I1002 20:45:11.440257   76641 mustload.go:65] Loading cluster: ha-872795
	I1002 20:45:11.440365   76641 notify.go:221] Checking for updates...
	I1002 20:45:11.440625   76641 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:11.440641   76641 status.go:174] checking status of ha-872795 ...
	I1002 20:45:11.441130   76641 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:11.460521   76641 status.go:371] ha-872795 host status = "Running" (err=<nil>)
	I1002 20:45:11.460543   76641 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:11.460783   76641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:45:11.477071   76641 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:11.477297   76641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:45:11.477347   76641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:45:11.493996   76641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:45:11.591679   76641 ssh_runner.go:195] Run: systemctl --version
	I1002 20:45:11.597719   76641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:45:11.609662   76641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:11.665172   76641 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:45:11.656155777 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 20:45:11.665748   76641 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:45:11.665779   76641 api_server.go:166] Checking apiserver status ...
	I1002 20:45:11.665819   76641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 20:45:11.675265   76641 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:45:11.675288   76641 status.go:463] ha-872795 apiserver status = Running (err=<nil>)
	I1002 20:45:11.675299   76641 status.go:176] ha-872795 status: &{Name:ha-872795 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1002 20:45:11.678860   12851 retry.go:31] will retry after 3.135937074s: exit status 6
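
Note: the apiserver probe behind api_server.go:166 is the `sudo pgrep -xnf kube-apiserver.*minikube.*` call; pgrep exits 1 when no process matches, which the harness records as "stopped". A sketch of the same check, run locally here rather than through the SSH runner:

    // Sketch: pgrep-based liveness check for kube-apiserver.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func apiserverPIDFound() bool {
        // -x exact match, -n newest process, -f match the full command line
        err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        return err == nil // exit status 1 means no matching pid
    }

    func main() {
        fmt.Println("apiserver pid found:", apiserverPIDFound())
    }
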
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5: exit status 6 (276.197464ms)

-- stdout --
	ha-872795
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 20:45:14.856698   76774 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:14.856988   76774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:14.856998   76774 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:14.857003   76774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:14.857178   76774 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:45:14.857344   76774 out.go:368] Setting JSON to false
	I1002 20:45:14.857365   76774 mustload.go:65] Loading cluster: ha-872795
	I1002 20:45:14.857428   76774 notify.go:221] Checking for updates...
	I1002 20:45:14.857692   76774 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:14.857713   76774 status.go:174] checking status of ha-872795 ...
	I1002 20:45:14.858079   76774 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:14.875573   76774 status.go:371] ha-872795 host status = "Running" (err=<nil>)
	I1002 20:45:14.875600   76774 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:14.875947   76774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:45:14.892840   76774 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:14.893084   76774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:45:14.893120   76774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:45:14.910131   76774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:45:15.007609   76774 ssh_runner.go:195] Run: systemctl --version
	I1002 20:45:15.013600   76774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:45:15.025600   76774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:15.077414   76774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:45:15.067276443 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 20:45:15.077846   76774 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:45:15.077871   76774 api_server.go:166] Checking apiserver status ...
	I1002 20:45:15.077903   76774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 20:45:15.087706   76774 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:45:15.087733   76774 status.go:463] ha-872795 apiserver status = Running (err=<nil>)
	I1002 20:45:15.087745   76774 status.go:176] ha-872795 status: &{Name:ha-872795 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1002 20:45:15.091917   12851 retry.go:31] will retry after 3.913599651s: exit status 6
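
Note: the recurring status.go:458 error means the kubeconfig file has no cluster entry named after the profile, which is why kubeconfig is reported as Misconfigured. A sketch of that lookup with client-go's clientcmd loader; the error wording mirrors the log, the helper itself is illustrative:

    // Sketch: look up a profile's API endpoint in a kubeconfig file.
    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func endpoint(kubeconfigPath, profile string) (string, error) {
        cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
        if err != nil {
            return "", err
        }
        cluster, ok := cfg.Clusters[profile]
        if !ok {
            return "", fmt.Errorf("get endpoint: %q does not appear in %s", profile, kubeconfigPath)
        }
        return cluster.Server, nil
    }

    func main() {
        ep, err := endpoint("/home/jenkins/minikube-integration/21683-9327/kubeconfig", "ha-872795")
        fmt.Println(ep, err)
    }
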
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5: exit status 6 (277.308134ms)

-- stdout --
	ha-872795
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 20:45:19.050097   76905 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:19.050366   76905 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:19.050377   76905 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:19.050383   76905 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:19.050607   76905 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:45:19.050806   76905 out.go:368] Setting JSON to false
	I1002 20:45:19.050833   76905 mustload.go:65] Loading cluster: ha-872795
	I1002 20:45:19.050962   76905 notify.go:221] Checking for updates...
	I1002 20:45:19.051192   76905 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:19.051207   76905 status.go:174] checking status of ha-872795 ...
	I1002 20:45:19.051682   76905 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:19.071539   76905 status.go:371] ha-872795 host status = "Running" (err=<nil>)
	I1002 20:45:19.071566   76905 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:19.071861   76905 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:45:19.088776   76905 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:19.089046   76905 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:45:19.089098   76905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:45:19.105505   76905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:45:19.202587   76905 ssh_runner.go:195] Run: systemctl --version
	I1002 20:45:19.208630   76905 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:45:19.220516   76905 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:19.272401   76905 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:45:19.262739316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 20:45:19.272850   76905 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:45:19.272884   76905 api_server.go:166] Checking apiserver status ...
	I1002 20:45:19.272926   76905 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 20:45:19.282723   76905 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:45:19.282744   76905 status.go:463] ha-872795 apiserver status = Running (err=<nil>)
	I1002 20:45:19.282754   76905 status.go:176] ha-872795 status: &{Name:ha-872795 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1002 20:45:19.286506   12851 retry.go:31] will retry after 4.319795923s: exit status 6
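
Note: every attempt also runs `df -h /var | awk 'NR==2{print $5}'` over SSH to read the Use% column for /var. A sketch of the same probe; the pipeline is verbatim from the log, the wrapper is illustrative:

    // Sketch: read the percentage of /var in use.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func varUsage() (string, error) {
        // NR==2 skips df's header row; $5 is the Use% column.
        out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        pct, err := varUsage()
        fmt.Println("/var usage:", pct, err) // e.g. "12%"
    }
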
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5: exit status 6 (281.300391ms)

-- stdout --
	ha-872795
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 20:45:23.651014   77039 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:23.651252   77039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:23.651261   77039 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:23.651265   77039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:23.651469   77039 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:45:23.651615   77039 out.go:368] Setting JSON to false
	I1002 20:45:23.651636   77039 mustload.go:65] Loading cluster: ha-872795
	I1002 20:45:23.651735   77039 notify.go:221] Checking for updates...
	I1002 20:45:23.651952   77039 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:23.651966   77039 status.go:174] checking status of ha-872795 ...
	I1002 20:45:23.652398   77039 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:23.671214   77039 status.go:371] ha-872795 host status = "Running" (err=<nil>)
	I1002 20:45:23.671249   77039 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:23.671498   77039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:45:23.687911   77039 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:23.688292   77039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:45:23.688350   77039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:45:23.705313   77039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:45:23.803564   77039 ssh_runner.go:195] Run: systemctl --version
	I1002 20:45:23.809448   77039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:45:23.821550   77039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:23.876902   77039 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:45:23.866634079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 20:45:23.877326   77039 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:45:23.877356   77039 api_server.go:166] Checking apiserver status ...
	I1002 20:45:23.877398   77039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 20:45:23.887307   77039 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:45:23.887325   77039 status.go:463] ha-872795 apiserver status = Running (err=<nil>)
	I1002 20:45:23.887335   77039 status.go:176] ha-872795 status: &{Name:ha-872795 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1002 20:45:23.891497   12851 retry.go:31] will retry after 8.015588479s: exit status 6
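
Note: the info.go:266 dumps come from `docker system info --format "{{json .}}"`, one JSON blob per attempt. A sketch that decodes only a few fields of interest; the struct is a deliberately partial view of that blob:

    // Sketch: decode a subset of `docker system info` JSON output.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type dockerInfo struct {
        ServerVersion   string
        OperatingSystem string
        NCPU            int
        MemTotal        int64
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("%s on %s, %d CPUs, %d bytes RAM\n",
            info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
    }
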
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5: exit status 6 (275.721004ms)

-- stdout --
	ha-872795
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 20:45:31.954750   77202 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:31.954871   77202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:31.954881   77202 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:31.954885   77202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:31.955104   77202 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:45:31.955267   77202 out.go:368] Setting JSON to false
	I1002 20:45:31.955289   77202 mustload.go:65] Loading cluster: ha-872795
	I1002 20:45:31.955433   77202 notify.go:221] Checking for updates...
	I1002 20:45:31.955613   77202 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:31.955625   77202 status.go:174] checking status of ha-872795 ...
	I1002 20:45:31.956076   77202 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:31.974334   77202 status.go:371] ha-872795 host status = "Running" (err=<nil>)
	I1002 20:45:31.974371   77202 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:31.974670   77202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:45:31.990712   77202 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:31.990942   77202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:45:31.990979   77202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:45:32.007705   77202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:45:32.105814   77202 ssh_runner.go:195] Run: systemctl --version
	I1002 20:45:32.111749   77202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:45:32.123336   77202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:32.176067   77202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:45:32.166797175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 20:45:32.176446   77202 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:45:32.176468   77202 api_server.go:166] Checking apiserver status ...
	I1002 20:45:32.176496   77202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 20:45:32.186253   77202 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:45:32.186273   77202 status.go:463] ha-872795 apiserver status = Running (err=<nil>)
	I1002 20:45:32.186286   77202 status.go:176] ha-872795 status: &{Name:ha-872795 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1002 20:45:32.190500   12851 retry.go:31] will retry after 10.671616541s: exit status 6
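
Note: helpers_test.go treats `exit status 6 (may be ok)` as expected here, which reads like an exit code composed of per-component bit flags rather than a single failure. A sketch of that pattern; the flag names and values below are assumptions for illustration, not taken from minikube's source:

    // Sketch: composing a status exit code from per-component flags.
    package main

    import "fmt"

    const (
        hostNotRunningFlag    = 1 << 0 // assumed value
        clusterNotRunningFlag = 1 << 1 // assumed value
        k8sNotRunningFlag     = 1 << 2 // assumed value
    )

    func main() {
        // Host is up, but apiserver is stopped and kubeconfig is misconfigured.
        code := clusterNotRunningFlag | k8sNotRunningFlag
        fmt.Println("exit status", code) // 6
    }
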
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5: exit status 6 (280.806871ms)

-- stdout --
	ha-872795
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 20:45:42.911252   77364 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:42.911524   77364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:42.911536   77364 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:42.911540   77364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:42.911737   77364 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:45:42.911907   77364 out.go:368] Setting JSON to false
	I1002 20:45:42.911930   77364 mustload.go:65] Loading cluster: ha-872795
	I1002 20:45:42.912046   77364 notify.go:221] Checking for updates...
	I1002 20:45:42.912255   77364 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:42.912271   77364 status.go:174] checking status of ha-872795 ...
	I1002 20:45:42.912660   77364 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:42.930391   77364 status.go:371] ha-872795 host status = "Running" (err=<nil>)
	I1002 20:45:42.930412   77364 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:42.930726   77364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:45:42.947857   77364 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:42.948126   77364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:45:42.948181   77364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:45:42.965414   77364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:45:43.062580   77364 ssh_runner.go:195] Run: systemctl --version
	I1002 20:45:43.068591   77364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:45:43.080147   77364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:43.136439   77364 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:45:43.1255986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 20:45:43.137022   77364 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:45:43.137065   77364 api_server.go:166] Checking apiserver status ...
	I1002 20:45:43.137119   77364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 20:45:43.147057   77364 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:45:43.147073   77364 status.go:463] ha-872795 apiserver status = Running (err=<nil>)
	I1002 20:45:43.147083   77364 status.go:176] ha-872795 status: &{Name:ha-872795 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1002 20:45:43.151129   12851 retry.go:31] will retry after 11.862450798s: exit status 6
E1002 20:45:54.133334   12851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
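Both failure signals in the stderr above can be reproduced by hand: status.go reports the kubeconfig as Misconfigured because the "ha-872795" entry is missing from the kubeconfig file, and it reports the apiserver as stopped because pgrep exits 1 when no matching process exists. A minimal sketch of the same two checks, assuming the paths and profile name taken from this log:
	# Is the expected context present in the kubeconfig the harness points at?
	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21683-9327/kubeconfig | grep ha-872795 || echo "context missing"
	# Is a kube-apiserver process running inside the node? pgrep exits 1 on no match.
	out/minikube-linux-amd64 -p ha-872795 ssh -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*'" || echo "no apiserver process"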
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5: exit status 6 (278.4746ms)

-- stdout --
	ha-872795
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 20:45:55.055877   77553 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:55.056123   77553 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:55.056131   77553 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:55.056135   77553 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:55.056315   77553 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:45:55.056467   77553 out.go:368] Setting JSON to false
	I1002 20:45:55.056488   77553 mustload.go:65] Loading cluster: ha-872795
	I1002 20:45:55.056583   77553 notify.go:221] Checking for updates...
	I1002 20:45:55.056907   77553 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:55.056925   77553 status.go:174] checking status of ha-872795 ...
	I1002 20:45:55.057433   77553 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:55.076077   77553 status.go:371] ha-872795 host status = "Running" (err=<nil>)
	I1002 20:45:55.076098   77553 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:55.076344   77553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:45:55.093106   77553 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:45:55.093385   77553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:45:55.093487   77553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:45:55.109461   77553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:45:55.207568   77553 ssh_runner.go:195] Run: systemctl --version
	I1002 20:45:55.213723   77553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:45:55.225990   77553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:55.279208   77553 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:45:55.269047603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 20:45:55.279629   77553 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:45:55.279670   77553 api_server.go:166] Checking apiserver status ...
	I1002 20:45:55.279710   77553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 20:45:55.289323   77553 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:45:55.289341   77553 status.go:463] ha-872795 apiserver status = Running (err=<nil>)
	I1002 20:45:55.289351   77553 status.go:176] ha-872795 status: &{Name:ha-872795 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5" : exit status 6
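Every status call in this test fails the same way: exit status 6 with host and kubelet Running, apiserver Stopped, and kubeconfig Misconfigured. The stdout block itself names the remediation for the kubeconfig half; a sketch of applying it with the same binary and profile (not executed in this run):
	# Rewrite the kubeconfig entry for this profile, then confirm kubectl sees it
	out/minikube-linux-amd64 -p ha-872795 update-context
	kubectl config current-context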
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 66301,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:35:02.679281779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f7265370a58453c840a8192d5da4a6feb4bbbb948c2dbabc9298cbd8189ca6f",
	            "SandboxKey": "/var/run/docker/netns/6f7265370a58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:33:39:9e:e0:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "632391284aa44c2eb0bec81b03555e3fed7084145763827002664c08b386a588",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
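Single fields from the inspect output above can be pulled with Go templates instead of reading the full JSON, which is how the harness itself resolves the SSH port (see the cli_runner lines elsewhere in this log). For example:
	# Host port mapped to the node's SSH port (32783 in the output above)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-872795
	# Node IP on the profile network (192.168.49.2 above)
	docker container inspect -f '{{(index .NetworkSettings.Networks "ha-872795").IPAddress}}' ha-872795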
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 6 (282.252105ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:45:55.579046   77672 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-753218 image ls --format json --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format table --alsologtostderr                                                     │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls                                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ delete  │ -p functional-753218                                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ start   │ ha-872795 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- rollout status deployment/busybox                                                          │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node add --alsologtostderr -v 5                                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node stop m02 --alsologtostderr -v 5                                                                  │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node start m02 --alsologtostderr -v 5                                                                 │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:34:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:34:57.603077   65735 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:34:57.603305   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603314   65735 out.go:374] Setting ErrFile to fd 2...
	I1002 20:34:57.603317   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603506   65735 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:34:57.604047   65735 out.go:368] Setting JSON to false
	I1002 20:34:57.604900   65735 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4647,"bootTime":1759432651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:34:57.604978   65735 start.go:140] virtualization: kvm guest
	I1002 20:34:57.606757   65735 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:34:57.608100   65735 notify.go:221] Checking for updates...
	I1002 20:34:57.608137   65735 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:34:57.609378   65735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:34:57.610568   65735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:34:57.611851   65735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:34:57.613234   65735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:34:57.614447   65735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:34:57.615922   65735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:34:57.640606   65735 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:34:57.640699   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.694592   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.684173929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.694720   65735 docker.go:319] overlay module found
	I1002 20:34:57.696786   65735 out.go:179] * Using the docker driver based on user configuration
	I1002 20:34:57.698015   65735 start.go:306] selected driver: docker
	I1002 20:34:57.698032   65735 start.go:936] validating driver "docker" against <nil>
	I1002 20:34:57.698041   65735 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:34:57.698548   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.751429   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.741121046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.751598   65735 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:34:57.751837   65735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:34:57.753668   65735 out.go:179] * Using Docker driver with root privileges
	I1002 20:34:57.755151   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:34:57.755225   65735 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 20:34:57.755237   65735 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:34:57.755322   65735 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:34:57.756619   65735 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:34:57.757986   65735 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:34:57.759335   65735 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:34:57.760371   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:57.760415   65735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:34:57.760432   65735 cache.go:59] Caching tarball of preloaded images
	I1002 20:34:57.760405   65735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:34:57.760507   65735 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:34:57.760515   65735 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:34:57.760879   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:34:57.760904   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json: {Name:mk498e6817e1668b9a52683e21d0fee45dc5b922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
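The cluster config dumped a few lines above is persisted as JSON at the config.json path shown here, and the file is usually easier to read than the inline Go struct dump. A sketch using the same path from this log:
	# Pretty-print the persisted profile config
	python3 -m json.tool /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json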
	I1002 20:34:57.779767   65735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:34:57.779785   65735 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:34:57.779798   65735 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:34:57.779817   65735 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:34:57.779902   65735 start.go:365] duration metric: took 70.896µs to acquireMachinesLock for "ha-872795"
	I1002 20:34:57.779922   65735 start.go:94] Provisioning new machine with config: &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:34:57.779985   65735 start.go:126] createHost starting for "" (driver="docker")
	I1002 20:34:57.782551   65735 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 20:34:57.782770   65735 start.go:160] libmachine.API.Create for "ha-872795" (driver="docker")
	I1002 20:34:57.782797   65735 client.go:168] LocalClient.Create starting
	I1002 20:34:57.782848   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem
	I1002 20:34:57.782884   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782899   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.782957   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem
	I1002 20:34:57.782975   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782985   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.783261   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:34:57.799786   65735 cli_runner.go:211] docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:34:57.799838   65735 network_create.go:284] running [docker network inspect ha-872795] to gather additional debugging logs...
	I1002 20:34:57.799850   65735 cli_runner.go:164] Run: docker network inspect ha-872795
	W1002 20:34:57.816003   65735 cli_runner.go:211] docker network inspect ha-872795 returned with exit code 1
	I1002 20:34:57.816028   65735 network_create.go:287] error running [docker network inspect ha-872795]: docker network inspect ha-872795: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-872795 not found
	I1002 20:34:57.816042   65735 network_create.go:289] output of [docker network inspect ha-872795]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-872795 not found
	
	** /stderr **
	I1002 20:34:57.816123   65735 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:34:57.832837   65735 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d0e700}
	I1002 20:34:57.832869   65735 network_create.go:124] attempt to create docker network ha-872795 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:34:57.832917   65735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-872795 ha-872795
	I1002 20:34:57.888359   65735 network_create.go:108] docker network ha-872795 192.168.49.0/24 created
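This inspect-then-create sequence is the normal path for a fresh profile: the first docker network inspect fails because no network exists yet, so a free private /24 is chosen (192.168.49.0/24 here) and created with an explicit gateway and MTU. The result can be checked afterwards with the same kind of Go template the harness uses, for example:
	# Confirm the subnet and gateway picked for the profile network
	docker network inspect ha-872795 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'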
	I1002 20:34:57.888401   65735 kic.go:121] calculated static IP "192.168.49.2" for the "ha-872795" container
	I1002 20:34:57.888473   65735 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:34:57.905091   65735 cli_runner.go:164] Run: docker volume create ha-872795 --label name.minikube.sigs.k8s.io=ha-872795 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:34:57.922367   65735 oci.go:103] Successfully created a docker volume ha-872795
	I1002 20:34:57.922439   65735 cli_runner.go:164] Run: docker run --rm --name ha-872795-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --entrypoint /usr/bin/test -v ha-872795:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:34:58.293551   65735 oci.go:107] Successfully prepared a docker volume ha-872795
	I1002 20:34:58.293606   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:58.293630   65735 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:34:58.293727   65735 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:35:02.570537   65735 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.276723115s)
	I1002 20:35:02.570574   65735 kic.go:203] duration metric: took 4.276941565s to extract preloaded images to volume ...
	W1002 20:35:02.570728   65735 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 20:35:02.570763   65735 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 20:35:02.570812   65735 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:35:02.622879   65735 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-872795 --name ha-872795 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-872795 --network ha-872795 --ip 192.168.49.2 --volume ha-872795:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:35:02.884170   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Running}}
	I1002 20:35:02.901900   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:02.919319   65735 cli_runner.go:164] Run: docker exec ha-872795 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:35:02.965612   65735 oci.go:144] the created container "ha-872795" has a running status.
	I1002 20:35:02.965668   65735 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa...
	I1002 20:35:03.839142   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 20:35:03.839192   65735 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:35:03.863522   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.880799   65735 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:35:03.880817   65735 kic_runner.go:114] Args: [docker exec --privileged ha-872795 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:35:03.926950   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.943420   65735 machine.go:93] provisionDockerMachine start ...
	I1002 20:35:03.943502   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:03.960150   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:03.960365   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:03.960377   65735 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:35:04.102560   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.102595   65735 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:35:04.102672   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.119770   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.119958   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.119969   65735 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:35:04.269954   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.270024   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.289297   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.289532   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.289560   65735 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:35:04.431294   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
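The shell block above is an idempotent /etc/hosts edit: if no line already ends with the hostname, it rewrites an existing 127.0.1.1 entry when one is present and appends a new one otherwise, so the hostname set in the previous step resolves locally. One way to confirm from outside the node (a sketch, not part of the harness):
	out/minikube-linux-amd64 -p ha-872795 ssh -- "hostname && grep 127.0.1.1 /etc/hosts"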
	I1002 20:35:04.431331   65735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:35:04.431373   65735 ubuntu.go:190] setting up certificates
	I1002 20:35:04.431385   65735 provision.go:84] configureAuth start
	I1002 20:35:04.431438   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:04.448348   65735 provision.go:143] copyHostCerts
	I1002 20:35:04.448379   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448406   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:35:04.448415   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448488   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:35:04.448574   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448595   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:35:04.448601   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448642   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:35:04.448726   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448747   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:35:04.448752   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448791   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:35:04.448862   65735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:35:04.988583   65735 provision.go:177] copyRemoteCerts
	I1002 20:35:04.988660   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:35:04.988705   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.006318   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.107594   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:35:05.107664   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:35:05.126297   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:35:05.126364   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 20:35:05.143480   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:35:05.143534   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:35:05.159876   65735 provision.go:87] duration metric: took 728.473665ms to configureAuth
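	For reference, the server certificate minikube generates above (org jenkins.ha-872795, SANs 127.0.0.1, 192.168.49.2, ha-872795, localhost, minikube, per the san=[...] list logged at 20:35:04.448862) can be approximated by hand with openssl; this is a sketch, not minikube's actual code path, and OpenSSL 3.0+ is assumed for -copy_extensions:
	# create a key and CSR carrying the same org and SANs minikube logged
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	    -subj "/O=jenkins.ha-872795" \
	    -addext "subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:ha-872795,DNS:localhost,DNS:minikube"
	# sign it with the CA minikube keeps under .minikube/certs, preserving the SAN extension
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -days 365 -copy_extensions copy -out server.pem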
	I1002 20:35:05.159898   65735 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:35:05.160046   65735 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:35:05.160126   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.177170   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:05.177376   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:05.177394   65735 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:35:05.428320   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:35:05.428345   65735 machine.go:96] duration metric: took 1.484903501s to provisionDockerMachine
	I1002 20:35:05.428356   65735 client.go:171] duration metric: took 7.645553848s to LocalClient.Create
	I1002 20:35:05.428375   65735 start.go:168] duration metric: took 7.645605965s to libmachine.API.Create "ha-872795"
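	The CRIO_MINIKUBE_OPTIONS drop-in written over SSH above is how minikube passes --insecure-registry 10.96.0.0/12 to CRI-O; it is assumed here that the kicbase crio unit sources /etc/sysconfig/crio.minikube via EnvironmentFile. A quick check on the node that the flag actually took effect:
	# inspect the generated drop-in and the unit that consumes it
	cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -n EnvironmentFile
	systemctl show crio --property=Environment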
	I1002 20:35:05.428384   65735 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:35:05.428397   65735 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:35:05.428457   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:35:05.428518   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.445519   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.548684   65735 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:35:05.552289   65735 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:35:05.552315   65735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:35:05.552328   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:35:05.552389   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:35:05.552506   65735 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:35:05.552523   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:35:05.552690   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:35:05.559922   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:05.579393   65735 start.go:297] duration metric: took 150.9926ms for postStartSetup
	I1002 20:35:05.579737   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.596296   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:35:05.596542   65735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:35:05.596580   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.613097   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.710446   65735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:35:05.714631   65735 start.go:129] duration metric: took 7.934634425s to createHost
	I1002 20:35:05.714669   65735 start.go:84] releasing machines lock for "ha-872795", held for 7.934754949s
	I1002 20:35:05.714730   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.731548   65735 ssh_runner.go:195] Run: cat /version.json
	I1002 20:35:05.731603   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.731636   65735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:35:05.731714   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.749775   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.750794   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.899878   65735 ssh_runner.go:195] Run: systemctl --version
	I1002 20:35:05.906012   65735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:35:05.939030   65735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:35:05.945162   65735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:35:05.945228   65735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:35:05.970822   65735 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
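	Renaming the stock bridge and podman CNI configs to *.mk_disabled, as logged above, leaves the CNI plugin minikube installs later (kindnet, per the multinode detection further below) as the only active network config. To confirm what was sidelined on the node:
	# disabled configs carry the .mk_disabled suffix
	ls -la /etc/cni/net.d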
	I1002 20:35:05.970843   65735 start.go:496] detecting cgroup driver to use...
	I1002 20:35:05.970881   65735 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:35:05.970920   65735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:35:05.986282   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:35:05.998266   65735 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:35:05.998312   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:35:06.014196   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:35:06.031061   65735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:35:06.110898   65735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:35:06.195426   65735 docker.go:234] disabling docker service ...
	I1002 20:35:06.195481   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:35:06.213240   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:35:06.225772   65735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:35:06.305091   65735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:35:06.379591   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:35:06.391746   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:35:06.405291   65735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:35:06.405351   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.415561   65735 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:35:06.415640   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.425296   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.433995   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.442516   65735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:35:06.450342   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.458541   65735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.471597   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.479764   65735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:35:06.486745   65735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:35:06.493925   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:06.570695   65735 ssh_runner.go:195] Run: sudo systemctl restart crio
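	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the following directives; one way to confirm after the restart:
	# grep the directives the commands above rewrote
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the log:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls)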
	I1002 20:35:06.674347   65735 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:35:06.674398   65735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:35:06.678210   65735 start.go:564] Will wait 60s for crictl version
	I1002 20:35:06.678265   65735 ssh_runner.go:195] Run: which crictl
	I1002 20:35:06.681670   65735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:35:06.705846   65735 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:35:06.705915   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.732749   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.761896   65735 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:35:06.763235   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:35:06.779789   65735 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:35:06.783762   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.793731   65735 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:35:06.793825   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:35:06.793893   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.823605   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.823628   65735 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:35:06.823701   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.848195   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.848217   65735 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:35:06.848224   65735 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:35:06.848297   65735 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
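	The kubelet unit override rendered above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the 359-byte scp a few lines below. The merged unit can be inspected on the node with standard systemd tooling:
	# show the base kubelet unit plus every drop-in in effect
	systemctl cat kubelet
	systemctl status kubelet --no-pager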
	I1002 20:35:06.848363   65735 ssh_runner.go:195] Run: crio config
	I1002 20:35:06.893288   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:35:06.893308   65735 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:35:06.893324   65735 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:35:06.893344   65735 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:35:06.893447   65735 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
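	A config like the one rendered above can be sanity-checked before (or instead of) a full kubeadm init; recent kubeadm releases ship a validate subcommand, and init itself accepts --dry-run. A sketch, using the path from the log:
	# schema-check the rendered v1beta4 config
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# or exercise init without mutating the node
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run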
	
	I1002 20:35:06.893469   65735 kube-vip.go:115] generating kube-vip config ...
	I1002 20:35:06.893510   65735 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 20:35:06.905349   65735 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:35:06.905448   65735 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
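	Two notes on the manifest above. First, because the lsmod probe at 20:35:06.905349 found no ip_vs modules, kube-vip's IPVS-backed control-plane load-balancing is skipped and the VIP relies on ARP advertisement alone (vip_arp: "true"); where the kernel ships the module, sudo modprobe ip_vs before start would make the IPVS path available again. Second, once the static pod runs, leader election and health can be observed through the lease and metrics port named in the manifest; a hedged sketch (reachability of the VIP from the caller is an assumption):
	# which control-plane node currently holds the plndr-cp-lock lease
	kubectl -n kube-system get lease plndr-cp-lock -o yaml
	# kube-vip's Prometheus endpoint, port from prometheus_server above
	curl -s http://192.168.49.254:2112/metrics | head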
	I1002 20:35:06.905495   65735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:35:06.913067   65735 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:35:06.913130   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 20:35:06.920364   65735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:35:06.932249   65735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:35:06.947038   65735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:35:06.959316   65735 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 20:35:06.973553   65735 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 20:35:06.977144   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.987569   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:07.065543   65735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:35:07.088051   65735 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:35:07.088073   65735 certs.go:195] generating shared ca certs ...
	I1002 20:35:07.088093   65735 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.088253   65735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:35:07.088318   65735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:35:07.088333   65735 certs.go:257] generating profile certs ...
	I1002 20:35:07.088394   65735 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:35:07.088419   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt with IP's: []
	I1002 20:35:07.271177   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt ...
	I1002 20:35:07.271216   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt: {Name:mkc9958351da3092e35cf2a0dff49f20b7dda9b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271433   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key ...
	I1002 20:35:07.271450   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key: {Name:mk9049d9b976553e3e7ceaf1eae8281114f2b93c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271566   65735 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2
	I1002 20:35:07.271586   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 20:35:07.440257   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 ...
	I1002 20:35:07.440291   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2: {Name:mk26eb0eaef560c0eb32f96e5b3e81634dfe2a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440486   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 ...
	I1002 20:35:07.440505   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2: {Name:mkc88fc8c0ccf5f6abaf6c3777b5135914058a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440621   65735 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt
	I1002 20:35:07.440775   65735 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key
	I1002 20:35:07.440868   65735 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:35:07.440913   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt with IP's: []
	I1002 20:35:07.579277   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt ...
	I1002 20:35:07.579309   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt: {Name:mk15b2fa98abbfdb48c91ec680acf9fb3d36790e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579497   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key ...
	I1002 20:35:07.579512   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key: {Name:mk9a0c724bfce05599dc583fc00cefa3d87ea4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579634   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:35:07.579676   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:35:07.579703   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:35:07.579724   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:35:07.579741   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:35:07.579761   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:35:07.579777   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:35:07.579793   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:35:07.579863   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:35:07.579919   65735 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:35:07.579934   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:35:07.579978   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:35:07.580017   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:35:07.580054   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:35:07.580109   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:07.580147   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.580168   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.580188   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.580777   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:35:07.598837   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:35:07.615911   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:35:07.632508   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:35:07.649448   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:35:07.666220   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:35:07.682564   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:35:07.698671   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:35:07.715367   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:35:07.733828   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:35:07.750440   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:35:07.767319   65735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
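	With the profile certs copied into /var/lib/minikube/certs, the SANs baked into the apiserver cert (generated above for 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2 and the HA VIP 192.168.49.254) can be double-checked in place; OpenSSL 1.1.1+ is assumed for -ext:
	# print subject and SANs of the copied apiserver cert
	sudo openssl x509 -noout -subject -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt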
	I1002 20:35:07.779274   65735 ssh_runner.go:195] Run: openssl version
	I1002 20:35:07.785412   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:35:07.793537   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797120   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797185   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.830751   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:35:07.839226   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:35:07.847497   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851214   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851284   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.884464   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:35:07.892882   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:35:07.901283   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904919   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904989   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.938825   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
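	The 3ec20f2e.0, b5213941.0 and 51391683.0 names above follow the OpenSSL c_rehash convention: each trusted PEM gets a <subject-hash>.0 symlink so verifiers can locate it by hash. The same pattern minikube scripts here works for any certificate:
	# compute the subject hash and create the lookup symlink, exactly as the log does
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"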
	I1002 20:35:07.947236   65735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:35:07.950736   65735 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:35:07.950798   65735 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:35:07.950893   65735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:35:07.950946   65735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:35:07.977912   65735 cri.go:89] found id: ""
	I1002 20:35:07.977979   65735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:35:07.986292   65735 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:35:07.994921   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:35:07.994975   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:35:08.002478   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:35:08.002496   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:35:08.002543   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:35:08.009971   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:35:08.010024   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:35:08.017237   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:35:08.024502   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:35:08.024556   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:35:08.031907   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.039330   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:35:08.039379   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.046463   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:35:08.053807   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:35:08.053903   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:35:08.060624   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:35:08.119106   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:35:08.173560   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:39:12.973904   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:39:12.973990   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:39:12.976774   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:12.976818   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:12.976887   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:12.976934   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:12.976995   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:12.977058   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:12.977098   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:12.977140   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:12.977186   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:12.977225   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:12.977267   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:12.977305   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:12.977341   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:12.977406   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:12.977543   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:12.977704   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:12.977780   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:12.979758   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:12.979842   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:12.979923   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:12.980019   65735 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:39:12.980106   65735 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:39:12.980194   65735 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:39:12.980269   65735 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:39:12.980321   65735 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:39:12.980413   65735 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980466   65735 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:39:12.980568   65735 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980632   65735 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:39:12.980716   65735 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:39:12.980755   65735 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:39:12.980811   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:12.980857   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:12.980906   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:12.980951   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:12.981048   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:12.981126   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:12.981273   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:12.981350   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:12.983755   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:12.983842   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:12.983938   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:12.984036   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:12.984176   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:12.984257   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:12.984363   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:12.984488   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:12.984545   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:12.984753   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:12.984854   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:12.984903   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.922515ms
	I1002 20:39:12.984979   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:12.985054   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:12.985161   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:12.985238   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:39:12.985311   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	I1002 20:39:12.985378   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	I1002 20:39:12.985440   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	I1002 20:39:12.985446   65735 kubeadm.go:318] 
	I1002 20:39:12.985522   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:39:12.985593   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:39:12.985693   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:39:12.985824   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:39:12.985912   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:39:12.986021   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:39:12.986035   65735 kubeadm.go:318] 
	W1002 20:39:12.986169   65735 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.922515ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
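The crictl commands kubeadm prints above are the quickest triage path for this failure mode. A minimal sketch of that loop, using the CRI-O socket path shown in the log:

    # list every Kubernetes container, including exited ones, minus pause sandboxes
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then inspect the logs of whichever container is failing
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

In this run the equivalent listings further down (the `crictl ps -a --quiet --name=...` calls at 20:43:18) all come back empty, which points at containers failing to be created at all rather than crashing after start.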
	
	I1002 20:39:12.986234   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:39:15.725293   65735 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.739035287s)
	I1002 20:39:15.725355   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:39:15.737845   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:39:15.737903   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:39:15.745492   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:39:15.745511   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:39:15.745550   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:39:15.752939   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:39:15.752992   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:39:15.759738   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:39:15.767128   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:39:15.767188   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:39:15.774064   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.781262   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:39:15.781310   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.788069   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:39:15.794962   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:39:15.795005   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
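The four grep-then-rm pairs above are minikube's stale-kubeconfig sweep: any conf under /etc/kubernetes that does not reference control-plane.minikube.internal:8443 is deleted before kubeadm init is retried. A sketch of the equivalent shell (with grep -q standing in for the logged grep calls):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done

Here every grep exits with status 2 because kubeadm reset already removed the files, so the rm calls are no-ops and the "found existing configuration files" list that follows is empty.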
	I1002 20:39:15.801555   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:39:15.838178   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:15.838265   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:15.857468   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:15.857554   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:15.857595   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:15.857662   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:15.857732   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:15.857790   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:15.857850   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:15.857910   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:15.857968   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:15.858025   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:15.858074   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:15.913423   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:15.913519   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:15.913695   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:15.919381   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:15.923412   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:15.923499   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:15.923576   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:15.923699   65735 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:39:15.923782   65735 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:39:15.923894   65735 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:39:15.923992   65735 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:39:15.924083   65735 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:39:15.924181   65735 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:39:15.924290   65735 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:39:15.924413   65735 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:39:15.924467   65735 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:39:15.924516   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:15.972082   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:16.453776   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:16.604369   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:16.663878   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:16.897315   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:16.897840   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:16.900020   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:16.904460   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:16.904580   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:16.904708   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:16.904777   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:16.917381   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:16.917507   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:16.923733   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:16.923961   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:16.924014   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:17.026795   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:17.026994   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:18.027761   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001153954s
	I1002 20:39:18.030723   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:18.030874   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:18.030983   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:18.031132   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:43:18.032268   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	I1002 20:43:18.032470   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	I1002 20:43:18.032708   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	I1002 20:43:18.032742   65735 kubeadm.go:318] 
	I1002 20:43:18.032978   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:43:18.033184   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:43:18.033380   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:43:18.033599   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:43:18.033817   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:43:18.034037   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:43:18.034052   65735 kubeadm.go:318] 
	I1002 20:43:18.036299   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:43:18.036439   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:43:18.037026   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 20:43:18.037095   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:43:18.037166   65735 kubeadm.go:402] duration metric: took 8m10.08637402s to StartCluster
	I1002 20:43:18.037208   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:43:18.037266   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:43:18.063637   65735 cri.go:89] found id: ""
	I1002 20:43:18.063696   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.063705   65735 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:43:18.063713   65735 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:43:18.063764   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:43:18.087963   65735 cri.go:89] found id: ""
	I1002 20:43:18.087984   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.087991   65735 logs.go:284] No container was found matching "etcd"
	I1002 20:43:18.087996   65735 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:43:18.088039   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:43:18.111877   65735 cri.go:89] found id: ""
	I1002 20:43:18.111898   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.111905   65735 logs.go:284] No container was found matching "coredns"
	I1002 20:43:18.111911   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:43:18.111958   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:43:18.134814   65735 cri.go:89] found id: ""
	I1002 20:43:18.134834   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.134841   65735 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:43:18.134847   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:43:18.134887   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:43:18.158528   65735 cri.go:89] found id: ""
	I1002 20:43:18.158551   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.158558   65735 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:43:18.158564   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:43:18.158607   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:43:18.182849   65735 cri.go:89] found id: ""
	I1002 20:43:18.182871   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.182878   65735 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:43:18.182883   65735 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:43:18.182926   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:43:18.207447   65735 cri.go:89] found id: ""
	I1002 20:43:18.207470   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.207481   65735 logs.go:284] No container was found matching "kindnet"
	I1002 20:43:18.207493   65735 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:43:18.207507   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:43:18.263385   65735 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:43:18.263409   65735 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:43:18.263421   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:43:18.324936   65735 logs.go:123] Gathering logs for container status ...
	I1002 20:43:18.324969   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:43:18.351509   65735 logs.go:123] Gathering logs for kubelet ...
	I1002 20:43:18.351536   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:43:18.417761   65735 logs.go:123] Gathering logs for dmesg ...
	I1002 20:43:18.417794   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1002 20:43:18.428567   65735 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:43:18.428641   65735 out.go:285] * 
	W1002 20:43:18.428716   65735 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.428735   65735 out.go:285] * 
	W1002 20:43:18.430415   65735 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:43:18.433894   65735 out.go:203] 
	W1002 20:43:18.435307   65735 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.435342   65735 out.go:285] * 
	I1002 20:43:18.437263   65735 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.80789124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.808364531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.809825138Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.812284177Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.829118501Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=de23e18f-3a17-4d7b-a8af-e1a02a4334e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.82971957Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=828cc504-98e9-4c1d-9909-ec787a19e362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.830613121Z" level=info msg="createCtr: deleting container ID e5fe47e5c3d4151793509df4103eb0bca8a823792ec2190e4e4516e131fcd986 from idIndex" id=de23e18f-3a17-4d7b-a8af-e1a02a4334e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.830643871Z" level=info msg="createCtr: removing container e5fe47e5c3d4151793509df4103eb0bca8a823792ec2190e4e4516e131fcd986" id=de23e18f-3a17-4d7b-a8af-e1a02a4334e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.830692992Z" level=info msg="createCtr: deleting container e5fe47e5c3d4151793509df4103eb0bca8a823792ec2190e4e4516e131fcd986 from storage" id=de23e18f-3a17-4d7b-a8af-e1a02a4334e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.831086681Z" level=info msg="createCtr: deleting container ID 57291768b61c47a14662319e07a2d4ed84edb16f185722902230ec989d104f37 from idIndex" id=828cc504-98e9-4c1d-9909-ec787a19e362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.831118356Z" level=info msg="createCtr: removing container 57291768b61c47a14662319e07a2d4ed84edb16f185722902230ec989d104f37" id=828cc504-98e9-4c1d-9909-ec787a19e362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.831146297Z" level=info msg="createCtr: deleting container 57291768b61c47a14662319e07a2d4ed84edb16f185722902230ec989d104f37 from storage" id=828cc504-98e9-4c1d-9909-ec787a19e362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.834059809Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-872795_kube-system_107e053e4ed538c93835a81754178211_0" id=de23e18f-3a17-4d7b-a8af-e1a02a4334e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.834384914Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=828cc504-98e9-4c1d-9909-ec787a19e362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.801176831Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=f9e43b14-f9cb-48fc-ba48-ffb17270d28a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.801937722Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=dff6c370-1413-4ce6-986d-5f797225357d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.802817173Z" level=info msg="Creating container: kube-system/etcd-ha-872795/etcd" id=6c0a5abf-f276-4465-941d-a70715ea3d44 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.803050039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.806308374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.8067161Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.820030337Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=6c0a5abf-f276-4465-941d-a70715ea3d44 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.82143539Z" level=info msg="createCtr: deleting container ID eb4972187a1477b5efce4a58370c16c08af83b9a2b142ce896d86d9ad64f4b1a from idIndex" id=6c0a5abf-f276-4465-941d-a70715ea3d44 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.821465254Z" level=info msg="createCtr: removing container eb4972187a1477b5efce4a58370c16c08af83b9a2b142ce896d86d9ad64f4b1a" id=6c0a5abf-f276-4465-941d-a70715ea3d44 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.82149363Z" level=info msg="createCtr: deleting container eb4972187a1477b5efce4a58370c16c08af83b9a2b142ce896d86d9ad64f4b1a from storage" id=6c0a5abf-f276-4465-941d-a70715ea3d44 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.823812358Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-872795_kube-system_d62260a65d00c99f27090ce6484101a9_0" id=6c0a5abf-f276-4465-941d-a70715ea3d44 name=/runtime.v1.RuntimeService/CreateContainer
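The line to focus on in this section is the repeated createCtr failure, "Container creation error: cannot open sd-bus: No such file or directory": every CreateContainer call for kube-apiserver, kube-controller-manager, and etcd dies the same way, which is consistent with the OCI runtime being asked to place containers into systemd-managed cgroups over a D-Bus socket that is not reachable in this environment. If that is indeed the cause, one workaround sketch (stock CRI-O config paths assumed, and the kubelet's cgroupDriver would have to be switched to match) is to move CRI-O off the systemd cgroup manager:

    # /etc/crio/crio.conf, or a drop-in under /etc/crio/crio.conf.d/
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

After editing, `sudo systemctl restart crio` lets the kubelet's retry loop attempt the static pods again.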
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:45:56.124084    4615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:56.124612    4615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:56.126183    4615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:56.126669    4615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:56.128236    4615 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:45:56 up  1:28,  0 user,  load average: 0.46, 0.43, 0.25
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:45:52 ha-872795 kubelet[1957]: E1002 20:45:52.834387    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:45:52 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:52 ha-872795 kubelet[1957]:  > podSandboxID="ef6e539a74c14f8dbdf14c5253393747982dcd66fa486356dbfe446d81891de8"
	Oct 02 20:45:52 ha-872795 kubelet[1957]: E1002 20:45:52.834487    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:45:52 ha-872795 kubelet[1957]:         container kube-apiserver start failed in pod kube-apiserver-ha-872795_kube-system(107e053e4ed538c93835a81754178211): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:52 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:45:52 ha-872795 kubelet[1957]: E1002 20:45:52.834529    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-872795" podUID="107e053e4ed538c93835a81754178211"
	Oct 02 20:45:52 ha-872795 kubelet[1957]: E1002 20:45:52.834583    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:45:52 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:52 ha-872795 kubelet[1957]:  > podSandboxID="079790ff593aadbe100b150a42b87bda86092d1fcff86f8774566f658d455d0a"
	Oct 02 20:45:52 ha-872795 kubelet[1957]: E1002 20:45:52.834682    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:45:52 ha-872795 kubelet[1957]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-872795_kube-system(bb93bd54c951d044e2ddbaf0dd48a41c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:52 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:45:52 ha-872795 kubelet[1957]: E1002 20:45:52.835860    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-872795" podUID="bb93bd54c951d044e2ddbaf0dd48a41c"
	Oct 02 20:45:54 ha-872795 kubelet[1957]: E1002 20:45:54.800766    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:45:54 ha-872795 kubelet[1957]: E1002 20:45:54.824105    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:45:54 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:54 ha-872795 kubelet[1957]:  > podSandboxID="a3772fd373a48cf1fd0c1e339ac227ef43a36813ee71e966f4b33f85ee2aa32d"
	Oct 02 20:45:54 ha-872795 kubelet[1957]: E1002 20:45:54.824194    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:45:54 ha-872795 kubelet[1957]:         container etcd start failed in pod etcd-ha-872795_kube-system(d62260a65d00c99f27090ce6484101a9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:54 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:45:54 ha-872795 kubelet[1957]: E1002 20:45:54.824240    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-872795" podUID="d62260a65d00c99f27090ce6484101a9"
	Oct 02 20:45:55 ha-872795 kubelet[1957]: E1002 20:45:55.448926    1957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:45:55 ha-872795 kubelet[1957]: I1002 20:45:55.620221    1957 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:45:55 ha-872795 kubelet[1957]: E1002 20:45:55.620673    1957 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
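The lease and node-registration refusals in the last few lines are downstream symptoms of the same createCtr failure: kube-apiserver was never created, so nothing is listening on 192.168.49.2:8443. A quick confirmation sketch from inside the node (assuming ss is present in the kicbase image):

    sudo ss -ltn | grep 8443    # no output means nothing is bound on the apiserver port

which distinguishes this case (the apiserver never started) from an apiserver that crashed after binding.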
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 6 (281.222841ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:45:56.481949   77987 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
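The status output above shows two stacked problems: the apiserver is Stopped, and the kubeconfig no longer carries an endpoint for this profile (the status.go:458 error). Once the cluster itself is healthy again, the kubectl side is normally repaired with the command the warning suggests, scoped to this profile:

    minikube update-context -p ha-872795
    kubectl config current-context

update-context only rewrites the kubeconfig entry; it cannot help while the apiserver is down, as it is here.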
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (48.01s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.49s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-872795" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-872795\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-872795\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-872795\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-872795" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-872795\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-872795\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-872795\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\
"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --
output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 66301,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:35:02.679281779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f7265370a58453c840a8192d5da4a6feb4bbbb948c2dbabc9298cbd8189ca6f",
	            "SandboxKey": "/var/run/docker/netns/6f7265370a58",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:33:39:9e:e0:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "632391284aa44c2eb0bec81b03555e3fed7084145763827002664c08b386a588",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
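Note that every entry in `HostConfig.PortBindings` above requests 127.0.0.1 with an empty `HostPort`, so Docker assigns ephemeral host ports at container start; the resolved mappings appear under `NetworkSettings.Ports` (22/tcp on 32783, 8443/tcp on 32786, and so on). Two equivalent ways to read a mapping back, assuming the container is still running; the `--format` template is the same one minikube itself invokes later in these logs:

    # Ephemeral host port for the container's SSH endpoint (22/tcp):
    docker port ha-872795 22/tcp
    docker container inspect ha-872795 \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'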
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 6 (275.02968ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 20:45:57.067932   78233 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

                                                
                                                
** /stderr **
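The exit status 6 here reflects a kubeconfig problem rather than a host problem: the container is Running, but the "ha-872795" endpoint is missing from the kubeconfig at /home/jenkins/minikube-integration/21683-9327/kubeconfig, which is what the "stale minikube-vm" warning refers to. A sketch of the recovery the warning suggests (not executed by the test harness, and it assumes the kubectl context carries the profile name, as minikube normally arranges):

    # Rewrite the kubeconfig entry to the cluster's current endpoint,
    # then confirm the context resolves:
    out/minikube-linux-amd64 -p ha-872795 update-context
    kubectl --context ha-872795 cluster-info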
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-753218 image ls --format json --alsologtostderr                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls --format table --alsologtostderr                                                     │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ image   │ functional-753218 image ls                                                                                      │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ delete  │ -p functional-753218                                                                                            │ functional-753218 │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │ 02 Oct 25 20:34 UTC │
	│ start   │ ha-872795 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- rollout status deployment/busybox                                                          │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node add --alsologtostderr -v 5                                                                       │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node stop m02 --alsologtostderr -v 5                                                                  │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node start m02 --alsologtostderr -v 5                                                                 │ ha-872795         │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:34:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:34:57.603077   65735 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:34:57.603305   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603314   65735 out.go:374] Setting ErrFile to fd 2...
	I1002 20:34:57.603317   65735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:34:57.603506   65735 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:34:57.604047   65735 out.go:368] Setting JSON to false
	I1002 20:34:57.604900   65735 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4647,"bootTime":1759432651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:34:57.604978   65735 start.go:140] virtualization: kvm guest
	I1002 20:34:57.606757   65735 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:34:57.608100   65735 notify.go:221] Checking for updates...
	I1002 20:34:57.608137   65735 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:34:57.609378   65735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:34:57.610568   65735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:34:57.611851   65735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:34:57.613234   65735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:34:57.614447   65735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:34:57.615922   65735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:34:57.640606   65735 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:34:57.640699   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.694592   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.684173929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.694720   65735 docker.go:319] overlay module found
	I1002 20:34:57.696786   65735 out.go:179] * Using the docker driver based on user configuration
	I1002 20:34:57.698015   65735 start.go:306] selected driver: docker
	I1002 20:34:57.698032   65735 start.go:936] validating driver "docker" against <nil>
	I1002 20:34:57.698041   65735 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:34:57.698548   65735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:34:57.751429   65735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:34:57.741121046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:34:57.751598   65735 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:34:57.751837   65735 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:34:57.753668   65735 out.go:179] * Using Docker driver with root privileges
	I1002 20:34:57.755151   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:34:57.755225   65735 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 20:34:57.755237   65735 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:34:57.755322   65735 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1002 20:34:57.756619   65735 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:34:57.757986   65735 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:34:57.759335   65735 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:34:57.760371   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:57.760415   65735 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:34:57.760432   65735 cache.go:59] Caching tarball of preloaded images
	I1002 20:34:57.760405   65735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:34:57.760507   65735 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:34:57.760515   65735 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:34:57.760879   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:34:57.760904   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json: {Name:mk498e6817e1668b9a52683e21d0fee45dc5b922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:34:57.779767   65735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:34:57.779785   65735 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:34:57.779798   65735 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:34:57.779817   65735 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:34:57.779902   65735 start.go:365] duration metric: took 70.896µs to acquireMachinesLock for "ha-872795"
	I1002 20:34:57.779922   65735 start.go:94] Provisioning new machine with config: &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:34:57.779985   65735 start.go:126] createHost starting for "" (driver="docker")
	I1002 20:34:57.782551   65735 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 20:34:57.782770   65735 start.go:160] libmachine.API.Create for "ha-872795" (driver="docker")
	I1002 20:34:57.782797   65735 client.go:168] LocalClient.Create starting
	I1002 20:34:57.782848   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem
	I1002 20:34:57.782884   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782899   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.782957   65735 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem
	I1002 20:34:57.782975   65735 main.go:141] libmachine: Decoding PEM data...
	I1002 20:34:57.782985   65735 main.go:141] libmachine: Parsing certificate...
	I1002 20:34:57.783261   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:34:57.799786   65735 cli_runner.go:211] docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:34:57.799838   65735 network_create.go:284] running [docker network inspect ha-872795] to gather additional debugging logs...
	I1002 20:34:57.799850   65735 cli_runner.go:164] Run: docker network inspect ha-872795
	W1002 20:34:57.816003   65735 cli_runner.go:211] docker network inspect ha-872795 returned with exit code 1
	I1002 20:34:57.816028   65735 network_create.go:287] error running [docker network inspect ha-872795]: docker network inspect ha-872795: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-872795 not found
	I1002 20:34:57.816042   65735 network_create.go:289] output of [docker network inspect ha-872795]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-872795 not found
	
	** /stderr **
	I1002 20:34:57.816123   65735 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:34:57.832837   65735 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d0e700}
	I1002 20:34:57.832869   65735 network_create.go:124] attempt to create docker network ha-872795 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:34:57.832917   65735 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-872795 ha-872795
	I1002 20:34:57.888359   65735 network_create.go:108] docker network ha-872795 192.168.49.0/24 created
	I1002 20:34:57.888401   65735 kic.go:121] calculated static IP "192.168.49.2" for the "ha-872795" container
	I1002 20:34:57.888473   65735 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:34:57.905091   65735 cli_runner.go:164] Run: docker volume create ha-872795 --label name.minikube.sigs.k8s.io=ha-872795 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:34:57.922367   65735 oci.go:103] Successfully created a docker volume ha-872795
	I1002 20:34:57.922439   65735 cli_runner.go:164] Run: docker run --rm --name ha-872795-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --entrypoint /usr/bin/test -v ha-872795:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:34:58.293551   65735 oci.go:107] Successfully prepared a docker volume ha-872795
	I1002 20:34:58.293606   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:34:58.293630   65735 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:34:58.293727   65735 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:35:02.570537   65735 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-872795:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.276723115s)
	I1002 20:35:02.570574   65735 kic.go:203] duration metric: took 4.276941565s to extract preloaded images to volume ...
	W1002 20:35:02.570728   65735 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 20:35:02.570763   65735 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 20:35:02.570812   65735 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:35:02.622879   65735 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-872795 --name ha-872795 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-872795 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-872795 --network ha-872795 --ip 192.168.49.2 --volume ha-872795:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:35:02.884170   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Running}}
	I1002 20:35:02.901900   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:02.919319   65735 cli_runner.go:164] Run: docker exec ha-872795 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:35:02.965612   65735 oci.go:144] the created container "ha-872795" has a running status.
	I1002 20:35:02.965668   65735 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa...
	I1002 20:35:03.839142   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 20:35:03.839192   65735 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:35:03.863522   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.880799   65735 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:35:03.880817   65735 kic_runner.go:114] Args: [docker exec --privileged ha-872795 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:35:03.926950   65735 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:35:03.943420   65735 machine.go:93] provisionDockerMachine start ...
	I1002 20:35:03.943502   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:03.960150   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:03.960365   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:03.960377   65735 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:35:04.102560   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.102595   65735 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:35:04.102672   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.119770   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.119958   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.119969   65735 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:35:04.269954   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:35:04.270024   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:04.289297   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:04.289532   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:04.289560   65735 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:35:04.431294   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:35:04.431331   65735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:35:04.431373   65735 ubuntu.go:190] setting up certificates
	I1002 20:35:04.431385   65735 provision.go:84] configureAuth start
	I1002 20:35:04.431438   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:04.448348   65735 provision.go:143] copyHostCerts
	I1002 20:35:04.448379   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448406   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:35:04.448415   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:35:04.448488   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:35:04.448574   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448595   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:35:04.448601   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:35:04.448642   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:35:04.448726   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448747   65735 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:35:04.448752   65735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:35:04.448791   65735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:35:04.448862   65735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:35:04.988583   65735 provision.go:177] copyRemoteCerts
	I1002 20:35:04.988660   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:35:04.988705   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.006318   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.107594   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:35:05.107664   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:35:05.126297   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:35:05.126364   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 20:35:05.143480   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:35:05.143534   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:35:05.159876   65735 provision.go:87] duration metric: took 728.473665ms to configureAuth
	I1002 20:35:05.159898   65735 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:35:05.160046   65735 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:35:05.160126   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.177170   65735 main.go:141] libmachine: Using SSH client type: native
	I1002 20:35:05.177376   65735 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 20:35:05.177394   65735 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:35:05.428320   65735 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:35:05.428345   65735 machine.go:96] duration metric: took 1.484903501s to provisionDockerMachine
	I1002 20:35:05.428356   65735 client.go:171] duration metric: took 7.645553848s to LocalClient.Create
	I1002 20:35:05.428375   65735 start.go:168] duration metric: took 7.645605965s to libmachine.API.Create "ha-872795"
	I1002 20:35:05.428384   65735 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:35:05.428397   65735 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:35:05.428457   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:35:05.428518   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.445519   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.548684   65735 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:35:05.552289   65735 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:35:05.552315   65735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:35:05.552328   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:35:05.552389   65735 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:35:05.552506   65735 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:35:05.552523   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:35:05.552690   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:35:05.559922   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:05.579393   65735 start.go:297] duration metric: took 150.9926ms for postStartSetup
	I1002 20:35:05.579737   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.596296   65735 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:35:05.596542   65735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:35:05.596580   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.613097   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.710446   65735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:35:05.714631   65735 start.go:129] duration metric: took 7.934634425s to createHost
	I1002 20:35:05.714669   65735 start.go:84] releasing machines lock for "ha-872795", held for 7.934754949s
	I1002 20:35:05.714730   65735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:35:05.731548   65735 ssh_runner.go:195] Run: cat /version.json
	I1002 20:35:05.731603   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.731636   65735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:35:05.731714   65735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:35:05.749775   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.750794   65735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:35:05.899878   65735 ssh_runner.go:195] Run: systemctl --version
	I1002 20:35:05.906012   65735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:35:05.939030   65735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:35:05.945162   65735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:35:05.945228   65735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:35:05.970822   65735 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
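
minikube sidelines competing bridge and podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them. A hypothetical inverse, should the originals ever need restoring (the loop is an assumption that mirrors the find/mv above):

    for f in /etc/cni/net.d/*.mk_disabled; do
      [ -e "$f" ] || continue              # glob may match nothing
      sudo mv "$f" "${f%.mk_disabled}"     # strip the suffix minikube added
    done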
	I1002 20:35:05.970843   65735 start.go:496] detecting cgroup driver to use...
	I1002 20:35:05.970881   65735 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:35:05.970920   65735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:35:05.986282   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:35:05.998266   65735 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:35:05.998312   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:35:06.014196   65735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:35:06.031061   65735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:35:06.110898   65735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:35:06.195426   65735 docker.go:234] disabling docker service ...
	I1002 20:35:06.195481   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:35:06.213240   65735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:35:06.225772   65735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:35:06.305091   65735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:35:06.379591   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:35:06.391746   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:35:06.405291   65735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:35:06.405351   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.415561   65735 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:35:06.415640   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.425296   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.433995   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.442516   65735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:35:06.450342   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.458541   65735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.471597   65735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:35:06.479764   65735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:35:06.486745   65735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:35:06.493925   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:06.570695   65735 ssh_runner.go:195] Run: sudo systemctl restart crio
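
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf carrying the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A sketch reconstructing the expected values from those commands (not a capture of the real file):

    # Expect roughly:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf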
	I1002 20:35:06.674347   65735 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:35:06.674398   65735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:35:06.678210   65735 start.go:564] Will wait 60s for crictl version
	I1002 20:35:06.678265   65735 ssh_runner.go:195] Run: which crictl
	I1002 20:35:06.681670   65735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:35:06.705846   65735 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:35:06.705915   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.732749   65735 ssh_runner.go:195] Run: crio --version
	I1002 20:35:06.761896   65735 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:35:06.763235   65735 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:35:06.779789   65735 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:35:06.783762   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.793731   65735 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:35:06.793825   65735 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:35:06.793893   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.823605   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.823628   65735 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:35:06.823701   65735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:35:06.848195   65735 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:35:06.848217   65735 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:35:06.848224   65735 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:35:06.848297   65735 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
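
Note the empty ExecStart= in the drop-in above: it clears the packaged command before the override, the standard systemd idiom for replacing rather than appending ExecStart. After the daemon-reload later in the log, the merged unit can be inspected with:

    systemctl cat kubelet    # prints kubelet.service plus the 10-kubeadm.conf drop-in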
	I1002 20:35:06.848363   65735 ssh_runner.go:195] Run: crio config
	I1002 20:35:06.893288   65735 cni.go:84] Creating CNI manager for ""
	I1002 20:35:06.893308   65735 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:35:06.893324   65735 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:35:06.893344   65735 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:35:06.893447   65735 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
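
This rendered kubeadm.yaml is exactly what the failing kubeadm init below consumes, so a dry run against it is a cheap way to validate the config without touching the node (the invocation mirrors the one in the log; --dry-run is standard kubeadm):

    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run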
	
	I1002 20:35:06.893469   65735 kube-vip.go:115] generating kube-vip config ...
	I1002 20:35:06.893510   65735 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 20:35:06.905349   65735 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:35:06.905448   65735 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
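
kube-vip gave up on IPVS-based control-plane load-balancing above because lsmod found no ip_vs modules, so the VIP (192.168.49.254) falls back to ARP-based leader election alone. On a host where the modules exist, one could load them before starting; module names here are the usual ip_vs set, and whether the kic container can see them is not guaranteed:

    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh   # load IPVS core plus common schedulers
    lsmod | grep ip_vs                                   # re-run the exact check minikube performs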
	I1002 20:35:06.905495   65735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:35:06.913067   65735 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:35:06.913130   65735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 20:35:06.920364   65735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:35:06.932249   65735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:35:06.947038   65735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:35:06.959316   65735 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 20:35:06.973553   65735 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 20:35:06.977144   65735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:35:06.987569   65735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:35:07.065543   65735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:35:07.088051   65735 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:35:07.088073   65735 certs.go:195] generating shared ca certs ...
	I1002 20:35:07.088093   65735 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.088253   65735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:35:07.088318   65735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:35:07.088333   65735 certs.go:257] generating profile certs ...
	I1002 20:35:07.088394   65735 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:35:07.088419   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt with IP's: []
	I1002 20:35:07.271177   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt ...
	I1002 20:35:07.271216   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt: {Name:mkc9958351da3092e35cf2a0dff49f20b7dda9b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271433   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key ...
	I1002 20:35:07.271450   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key: {Name:mk9049d9b976553e3e7ceaf1eae8281114f2b93c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.271566   65735 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2
	I1002 20:35:07.271586   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 20:35:07.440257   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 ...
	I1002 20:35:07.440291   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2: {Name:mk26eb0eaef560c0eb32f96e5b3e81634dfe2a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440486   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 ...
	I1002 20:35:07.440505   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2: {Name:mkc88fc8c0ccf5f6abaf6c3777b5135914058a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.440621   65735 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt
	I1002 20:35:07.440775   65735 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.75606ee2 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key
	I1002 20:35:07.440868   65735 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:35:07.440913   65735 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt with IP's: []
	I1002 20:35:07.579277   65735 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt ...
	I1002 20:35:07.579309   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt: {Name:mk15b2fa98abbfdb48c91ec680acf9fb3d36790e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579497   65735 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key ...
	I1002 20:35:07.579512   65735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key: {Name:mk9a0c724bfce05599dc583fc00cefa3d87ea4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:35:07.579634   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:35:07.579676   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:35:07.579703   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:35:07.579724   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:35:07.579741   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:35:07.579761   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:35:07.579777   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:35:07.579793   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:35:07.579863   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:35:07.579919   65735 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:35:07.579934   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:35:07.579978   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:35:07.580017   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:35:07.580054   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:35:07.580109   65735 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:35:07.580147   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.580168   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.580188   65735 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.580777   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:35:07.598837   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:35:07.615911   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:35:07.632508   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:35:07.649448   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:35:07.666220   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:35:07.682564   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:35:07.698671   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:35:07.715367   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:35:07.733828   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:35:07.750440   65735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:35:07.767319   65735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:35:07.779274   65735 ssh_runner.go:195] Run: openssl version
	I1002 20:35:07.785412   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:35:07.793537   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797120   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.797185   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:35:07.830751   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:35:07.839226   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:35:07.847497   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851214   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.851284   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:35:07.884464   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:35:07.892882   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:35:07.901283   65735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904919   65735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.904989   65735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:35:07.938825   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
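
The three-step pattern above repeats per certificate: install the PEM under /usr/share/ca-certificates, hash it, then symlink /etc/ssl/certs/<hash>.0 so OpenSSL's lookup-by-subject-hash finds it. Condensed into a sketch (paths and the b5213941 hash taken from the log):

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"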
	I1002 20:35:07.947236   65735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:35:07.950736   65735 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:35:07.950798   65735 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:35:07.950893   65735 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:35:07.950946   65735 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:35:07.977912   65735 cri.go:89] found id: ""
	I1002 20:35:07.977979   65735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:35:07.986292   65735 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:35:07.994921   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:35:07.994975   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:35:08.002478   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:35:08.002496   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:35:08.002543   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:35:08.009971   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:35:08.010024   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:35:08.017237   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:35:08.024502   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:35:08.024556   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:35:08.031907   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.039330   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:35:08.039379   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:35:08.046463   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:35:08.053807   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:35:08.053903   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:35:08.060624   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:35:08.119106   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:35:08.173560   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:39:12.973904   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:39:12.973990   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:39:12.976774   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:12.976818   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:12.976887   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:12.976934   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:12.976995   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:12.977058   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:12.977098   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:12.977140   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:12.977186   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:12.977225   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:12.977267   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:12.977305   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:12.977341   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:12.977406   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:12.977543   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:12.977704   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:12.977780   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:12.979758   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:12.979842   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:12.979923   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:12.980019   65735 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:39:12.980106   65735 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:39:12.980194   65735 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:39:12.980269   65735 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:39:12.980321   65735 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:39:12.980413   65735 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980466   65735 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:39:12.980568   65735 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:39:12.980632   65735 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:39:12.980716   65735 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:39:12.980755   65735 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:39:12.980811   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:12.980857   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:12.980906   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:12.980951   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:12.981048   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:12.981126   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:12.981273   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:12.981350   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:12.983755   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:12.983842   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:12.983938   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:12.984036   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:12.984176   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:12.984257   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:12.984363   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:12.984488   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:12.984545   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:12.984753   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:12.984854   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:12.984903   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.922515ms
	I1002 20:39:12.984979   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:12.985054   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:12.985161   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:12.985238   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:39:12.985311   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	I1002 20:39:12.985378   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	I1002 20:39:12.985440   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	I1002 20:39:12.985446   65735 kubeadm.go:318] 
	I1002 20:39:12.985522   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:39:12.985593   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:39:12.985693   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:39:12.985824   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:39:12.985912   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:39:12.986021   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:39:12.986035   65735 kubeadm.go:318] 
	W1002 20:39:12.986169   65735 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-872795 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.922515ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000822404s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001115577s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001271095s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
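
All three components refused connections for the full 4m0s, which in practice means the static pods never became ready under CRI-O; note the apiserver check also resolves control-plane.minikube.internal (the kube-vip VIP written to /etc/hosts earlier), so a dead VIP fails it too. Following kubeadm's own hint, plus the matching journal queries (standard crictl/journalctl usage; CONTAINERID is a placeholder):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    sudo journalctl -u crio -u kubelet --since "10 min ago" --no-pager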
	
	I1002 20:39:12.986234   65735 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:39:15.725293   65735 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.739035287s)
	I1002 20:39:15.725355   65735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:39:15.737845   65735 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:39:15.737903   65735 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:39:15.745492   65735 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:39:15.745511   65735 kubeadm.go:157] found existing configuration files:
	
	I1002 20:39:15.745550   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:39:15.752939   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:39:15.752992   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:39:15.759738   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:39:15.767128   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:39:15.767188   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:39:15.774064   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.781262   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:39:15.781310   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:39:15.788069   65735 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:39:15.794962   65735 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:39:15.795005   65735 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
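	The grep and rm pairs above are minikube's stale-config sweep: each kubeadm kubeconfig file is kept only if it already references the expected control-plane endpoint, and is deleted otherwise so that kubeadm init regenerates it. Condensed into an equivalent sketch (same files and endpoint as in this log, run on the node):

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # keep the file only if it already points at the expected endpoint
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done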
	I1002 20:39:15.801555   65735 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:39:15.838178   65735 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:39:15.838265   65735 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:39:15.857468   65735 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:39:15.857554   65735 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:39:15.857595   65735 kubeadm.go:318] OS: Linux
	I1002 20:39:15.857662   65735 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:39:15.857732   65735 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:39:15.857790   65735 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:39:15.857850   65735 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:39:15.857910   65735 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:39:15.857968   65735 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:39:15.858025   65735 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:39:15.858074   65735 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:39:15.913423   65735 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:39:15.913519   65735 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:39:15.913695   65735 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:39:15.919381   65735 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:39:15.923412   65735 out.go:252]   - Generating certificates and keys ...
	I1002 20:39:15.923499   65735 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:39:15.923576   65735 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:39:15.923699   65735 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:39:15.923782   65735 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:39:15.923894   65735 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:39:15.923992   65735 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:39:15.924083   65735 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:39:15.924181   65735 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:39:15.924290   65735 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:39:15.924413   65735 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:39:15.924467   65735 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:39:15.924516   65735 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:39:15.972082   65735 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:39:16.453776   65735 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:39:16.604369   65735 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:39:16.663878   65735 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:39:16.897315   65735 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:39:16.897840   65735 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:39:16.900020   65735 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:39:16.904460   65735 out.go:252]   - Booting up control plane ...
	I1002 20:39:16.904580   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:39:16.904708   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:39:16.904777   65735 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:39:16.917381   65735 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:39:16.917507   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:39:16.923733   65735 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:39:16.923961   65735 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:39:16.924014   65735 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:39:17.026795   65735 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:39:17.026994   65735 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:39:18.027761   65735 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001153954s
	I1002 20:39:18.030723   65735 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:39:18.030874   65735 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:39:18.030983   65735 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:39:18.031132   65735 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:43:18.032268   65735 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	I1002 20:43:18.032470   65735 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	I1002 20:43:18.032708   65735 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	I1002 20:43:18.032742   65735 kubeadm.go:318] 
	I1002 20:43:18.032978   65735 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:43:18.033184   65735 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:43:18.033380   65735 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:43:18.033599   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:43:18.033817   65735 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:43:18.034037   65735 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:43:18.034052   65735 kubeadm.go:318] 
	I1002 20:43:18.036299   65735 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:43:18.036439   65735 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:43:18.037026   65735 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 20:43:18.037095   65735 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
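	The four-minute control-plane wait that just failed is kubeadm polling three HTTPS health endpoints. When triaging a run like this one, the same probes can be replayed by hand from inside the node to separate a component that never bound its port (connection refused, as for the scheduler and controller-manager here) from one that is up but wedged (timeout, as for the apiserver check). A sketch using the exact endpoints from this log; -k is needed because these components serve self-signed certificates:

	curl -sk --max-time 10 https://192.168.49.2:8443/livez; echo      # kube-apiserver
	curl -sk --max-time 10 https://127.0.0.1:10257/healthz; echo      # kube-controller-manager
	curl -sk --max-time 10 https://127.0.0.1:10259/livez; echo        # kube-scheduler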
	I1002 20:43:18.037166   65735 kubeadm.go:402] duration metric: took 8m10.08637402s to StartCluster
	I1002 20:43:18.037208   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:43:18.037266   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:43:18.063637   65735 cri.go:89] found id: ""
	I1002 20:43:18.063696   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.063705   65735 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:43:18.063713   65735 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:43:18.063764   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:43:18.087963   65735 cri.go:89] found id: ""
	I1002 20:43:18.087984   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.087991   65735 logs.go:284] No container was found matching "etcd"
	I1002 20:43:18.087996   65735 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:43:18.088039   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:43:18.111877   65735 cri.go:89] found id: ""
	I1002 20:43:18.111898   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.111905   65735 logs.go:284] No container was found matching "coredns"
	I1002 20:43:18.111911   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:43:18.111958   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:43:18.134814   65735 cri.go:89] found id: ""
	I1002 20:43:18.134834   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.134841   65735 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:43:18.134847   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:43:18.134887   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:43:18.158528   65735 cri.go:89] found id: ""
	I1002 20:43:18.158551   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.158558   65735 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:43:18.158564   65735 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:43:18.158607   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:43:18.182849   65735 cri.go:89] found id: ""
	I1002 20:43:18.182871   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.182878   65735 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:43:18.182883   65735 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:43:18.182926   65735 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:43:18.207447   65735 cri.go:89] found id: ""
	I1002 20:43:18.207470   65735 logs.go:282] 0 containers: []
	W1002 20:43:18.207481   65735 logs.go:284] No container was found matching "kindnet"
	I1002 20:43:18.207493   65735 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:43:18.207507   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:43:18.263385   65735 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:43:18.256379    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.256918    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258440    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.258875    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:43:18.260324    2546 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:43:18.263409   65735 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:43:18.263421   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:43:18.324936   65735 logs.go:123] Gathering logs for container status ...
	I1002 20:43:18.324969   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:43:18.351509   65735 logs.go:123] Gathering logs for kubelet ...
	I1002 20:43:18.351536   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:43:18.417761   65735 logs.go:123] Gathering logs for dmesg ...
	I1002 20:43:18.417794   65735 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1002 20:43:18.428567   65735 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:43:18.428641   65735 out.go:285] * 
	W1002 20:43:18.428716   65735 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.428735   65735 out.go:285] * 
	W1002 20:43:18.430415   65735 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:43:18.433894   65735 out.go:203] 
	W1002 20:43:18.435307   65735 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001153954s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000946137s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000986363s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001117623s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:43:18.435342   65735 out.go:285] * 
	I1002 20:43:18.437263   65735 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.80789124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.808364531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.809825138Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.812284177Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.829118501Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=de23e18f-3a17-4d7b-a8af-e1a02a4334e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.82971957Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=828cc504-98e9-4c1d-9909-ec787a19e362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.830613121Z" level=info msg="createCtr: deleting container ID e5fe47e5c3d4151793509df4103eb0bca8a823792ec2190e4e4516e131fcd986 from idIndex" id=de23e18f-3a17-4d7b-a8af-e1a02a4334e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.830643871Z" level=info msg="createCtr: removing container e5fe47e5c3d4151793509df4103eb0bca8a823792ec2190e4e4516e131fcd986" id=de23e18f-3a17-4d7b-a8af-e1a02a4334e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.830692992Z" level=info msg="createCtr: deleting container e5fe47e5c3d4151793509df4103eb0bca8a823792ec2190e4e4516e131fcd986 from storage" id=de23e18f-3a17-4d7b-a8af-e1a02a4334e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.831086681Z" level=info msg="createCtr: deleting container ID 57291768b61c47a14662319e07a2d4ed84edb16f185722902230ec989d104f37 from idIndex" id=828cc504-98e9-4c1d-9909-ec787a19e362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.831118356Z" level=info msg="createCtr: removing container 57291768b61c47a14662319e07a2d4ed84edb16f185722902230ec989d104f37" id=828cc504-98e9-4c1d-9909-ec787a19e362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.831146297Z" level=info msg="createCtr: deleting container 57291768b61c47a14662319e07a2d4ed84edb16f185722902230ec989d104f37 from storage" id=828cc504-98e9-4c1d-9909-ec787a19e362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.834059809Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-872795_kube-system_107e053e4ed538c93835a81754178211_0" id=de23e18f-3a17-4d7b-a8af-e1a02a4334e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:52 ha-872795 crio[775]: time="2025-10-02T20:45:52.834384914Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=828cc504-98e9-4c1d-9909-ec787a19e362 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.801176831Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=f9e43b14-f9cb-48fc-ba48-ffb17270d28a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.801937722Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=dff6c370-1413-4ce6-986d-5f797225357d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.802817173Z" level=info msg="Creating container: kube-system/etcd-ha-872795/etcd" id=6c0a5abf-f276-4465-941d-a70715ea3d44 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.803050039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.806308374Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.8067161Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.820030337Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=6c0a5abf-f276-4465-941d-a70715ea3d44 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.82143539Z" level=info msg="createCtr: deleting container ID eb4972187a1477b5efce4a58370c16c08af83b9a2b142ce896d86d9ad64f4b1a from idIndex" id=6c0a5abf-f276-4465-941d-a70715ea3d44 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.821465254Z" level=info msg="createCtr: removing container eb4972187a1477b5efce4a58370c16c08af83b9a2b142ce896d86d9ad64f4b1a" id=6c0a5abf-f276-4465-941d-a70715ea3d44 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.82149363Z" level=info msg="createCtr: deleting container eb4972187a1477b5efce4a58370c16c08af83b9a2b142ce896d86d9ad64f4b1a from storage" id=6c0a5abf-f276-4465-941d-a70715ea3d44 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:45:54 ha-872795 crio[775]: time="2025-10-02T20:45:54.823812358Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-872795_kube-system_d62260a65d00c99f27090ce6484101a9_0" id=6c0a5abf-f276-4465-941d-a70715ea3d44 name=/runtime.v1.RuntimeService/CreateContainer
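	The journal excerpt above shows the proximate failure for every control-plane pod: each CreateContainer request dies with `cannot open sd-bus: No such file or directory`, after which CRI-O rolls the half-created container back out of storage. The kubeadm output earlier in this log already gives the inspection recipe; spelled out against this node's socket:

	# list every Kubernetes container the runtime knows about, including dead ones
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then pull the logs of a failing container by ID
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

	In this run the listing would come back empty, as the container status section below confirms: the containers fail at creation, so the createCtr errors in this journal are the only evidence available.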
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:45:57.615745    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:57.616214    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:57.617857    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:57.618283    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:45:57.619740    4784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:45:57 up  1:28,  0 user,  load average: 0.46, 0.43, 0.25
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:45:52 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:52 ha-872795 kubelet[1957]:  > podSandboxID="ef6e539a74c14f8dbdf14c5253393747982dcd66fa486356dbfe446d81891de8"
	Oct 02 20:45:52 ha-872795 kubelet[1957]: E1002 20:45:52.834487    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:45:52 ha-872795 kubelet[1957]:         container kube-apiserver start failed in pod kube-apiserver-ha-872795_kube-system(107e053e4ed538c93835a81754178211): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:52 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:45:52 ha-872795 kubelet[1957]: E1002 20:45:52.834529    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-872795" podUID="107e053e4ed538c93835a81754178211"
	Oct 02 20:45:52 ha-872795 kubelet[1957]: E1002 20:45:52.834583    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:45:52 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:52 ha-872795 kubelet[1957]:  > podSandboxID="079790ff593aadbe100b150a42b87bda86092d1fcff86f8774566f658d455d0a"
	Oct 02 20:45:52 ha-872795 kubelet[1957]: E1002 20:45:52.834682    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:45:52 ha-872795 kubelet[1957]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-872795_kube-system(bb93bd54c951d044e2ddbaf0dd48a41c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:52 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:45:52 ha-872795 kubelet[1957]: E1002 20:45:52.835860    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-872795" podUID="bb93bd54c951d044e2ddbaf0dd48a41c"
	Oct 02 20:45:54 ha-872795 kubelet[1957]: E1002 20:45:54.800766    1957 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:45:54 ha-872795 kubelet[1957]: E1002 20:45:54.824105    1957 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:45:54 ha-872795 kubelet[1957]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:54 ha-872795 kubelet[1957]:  > podSandboxID="a3772fd373a48cf1fd0c1e339ac227ef43a36813ee71e966f4b33f85ee2aa32d"
	Oct 02 20:45:54 ha-872795 kubelet[1957]: E1002 20:45:54.824194    1957 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:45:54 ha-872795 kubelet[1957]:         container etcd start failed in pod etcd-ha-872795_kube-system(d62260a65d00c99f27090ce6484101a9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:45:54 ha-872795 kubelet[1957]:  > logger="UnhandledError"
	Oct 02 20:45:54 ha-872795 kubelet[1957]: E1002 20:45:54.824240    1957 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-872795" podUID="d62260a65d00c99f27090ce6484101a9"
	Oct 02 20:45:55 ha-872795 kubelet[1957]: E1002 20:45:55.448926    1957 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:45:55 ha-872795 kubelet[1957]: I1002 20:45:55.620221    1957 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:45:55 ha-872795 kubelet[1957]: E1002 20:45:55.620673    1957 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:45:56 ha-872795 kubelet[1957]: E1002 20:45:56.716917    1957 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-872795.186ac7230cb11fa8  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-872795,UID:ha-872795,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-872795 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-872795,},FirstTimestamp:2025-10-02 20:39:17.792317352 +0000 UTC m=+0.765279708,LastTimestamp:2025-10-02 20:39:17.792317352 +0000 UTC m=+0.765279708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-872795,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 6 (283.572543ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 20:45:57.971175   78556 status.go:458] kubeconfig endpoint: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.49s)
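The exit status 6 from the status probe is the degraded-but-expected path: the node container answered, but the profile's entry is missing from the test's kubeconfig, as the stderr above states. The warning's own remedy can be checked and applied with (profile name and kubeconfig path as in this run):

	# confirm the profile's context really is absent
	kubectl config get-contexts --kubeconfig=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	# regenerate it, as the stdout warning suggests
	out/minikube-linux-amd64 update-context -p ha-872795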

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (369.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-872795 stop --alsologtostderr -v 5: (1.203341819s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 start --wait true --alsologtostderr -v 5
E1002 20:50:54.123914   12851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 start --wait true --alsologtostderr -v 5: exit status 80 (6m7.178769317s)

                                                
                                                
-- stdout --
	* [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 20:45:59.273214   78899 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:59.273492   78899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:59.273503   78899 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:59.273509   78899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:59.273755   78899 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:45:59.274203   78899 out.go:368] Setting JSON to false
	I1002 20:45:59.275100   78899 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5308,"bootTime":1759432651,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:45:59.275172   78899 start.go:140] virtualization: kvm guest
	I1002 20:45:59.277322   78899 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:45:59.278717   78899 notify.go:221] Checking for updates...
	I1002 20:45:59.278734   78899 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:45:59.280224   78899 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:45:59.281523   78899 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:45:59.282829   78899 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:45:59.283968   78899 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:45:59.285159   78899 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:45:59.286946   78899 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:59.287045   78899 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:45:59.312895   78899 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:45:59.312963   78899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:59.365695   78899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:45:59.355393625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:45:59.365846   78899 docker.go:319] overlay module found
	I1002 20:45:59.367547   78899 out.go:179] * Using the docker driver based on existing profile
	I1002 20:45:59.368669   78899 start.go:306] selected driver: docker
	I1002 20:45:59.368691   78899 start.go:936] validating driver "docker" against &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:45:59.368764   78899 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:45:59.368835   78899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:59.420192   78899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:45:59.410429763 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:45:59.420918   78899 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:45:59.420950   78899 cni.go:84] Creating CNI manager for ""
	I1002 20:45:59.420996   78899 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:45:59.421049   78899 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
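The cluster config dumped above is also persisted as JSON under the profile directory (the profile.go line a few entries below shows the exact path). A minimal sketch for inspecting it offline, assuming jq is available on the host; the field names are taken from the struct dump above:

	jq '{Name, KubernetesVersion: .KubernetesConfig.KubernetesVersion, Nodes}' \
	  /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json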
	I1002 20:45:59.422984   78899 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:45:59.424152   78899 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:45:59.425341   78899 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:45:59.426550   78899 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:45:59.426588   78899 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:45:59.426598   78899 cache.go:59] Caching tarball of preloaded images
	I1002 20:45:59.426671   78899 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:45:59.426737   78899 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:45:59.426752   78899 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:45:59.426839   78899 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:45:59.445659   78899 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:45:59.445684   78899 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:45:59.445705   78899 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:45:59.445727   78899 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:45:59.445817   78899 start.go:365] duration metric: took 46.032µs to acquireMachinesLock for "ha-872795"
	I1002 20:45:59.445849   78899 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:45:59.445859   78899 fix.go:55] fixHost starting: 
	I1002 20:45:59.446055   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:59.462065   78899 fix.go:113] recreateIfNeeded on ha-872795: state=Stopped err=<nil>
	W1002 20:45:59.462095   78899 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:45:59.463993   78899 out.go:252] * Restarting existing docker container for "ha-872795" ...
	I1002 20:45:59.464064   78899 cli_runner.go:164] Run: docker start ha-872795
	I1002 20:45:59.685107   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:59.703014   78899 kic.go:430] container "ha-872795" state is running.
	I1002 20:45:59.703476   78899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:45:59.721917   78899 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:45:59.722128   78899 machine.go:93] provisionDockerMachine start ...
	I1002 20:45:59.722199   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:45:59.740462   78899 main.go:141] libmachine: Using SSH client type: native
	I1002 20:45:59.740703   78899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 20:45:59.740719   78899 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:45:59.741377   78899 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45756->127.0.0.1:32788: read: connection reset by peer
	I1002 20:46:02.885620   78899 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:46:02.885643   78899 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:46:02.885724   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:02.903157   78899 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:02.903362   78899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 20:46:02.903374   78899 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:46:03.053956   78899 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:46:03.054038   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:03.071746   78899 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:03.071971   78899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 20:46:03.071994   78899 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:46:03.214048   78899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
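The hosts-file script above is idempotent: it rewrites the 127.0.1.1 entry only when the hostname is missing from /etc/hosts. An illustrative way to confirm the result over the same forwarded SSH port, reusing the key, user, and port (32788) that appear in the sshutil lines below; this check is not part of the test itself:

	ssh -o StrictHostKeyChecking=no -p 32788 \
	  -i /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa \
	  docker@127.0.0.1 'grep ha-872795 /etc/hosts'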
	I1002 20:46:03.214082   78899 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:46:03.214121   78899 ubuntu.go:190] setting up certificates
	I1002 20:46:03.214132   78899 provision.go:84] configureAuth start
	I1002 20:46:03.214197   78899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:46:03.231298   78899 provision.go:143] copyHostCerts
	I1002 20:46:03.231330   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:46:03.231366   78899 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:46:03.231391   78899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:46:03.231472   78899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:46:03.231573   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:46:03.231600   78899 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:46:03.231610   78899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:46:03.231673   78899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:46:03.231747   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:46:03.231769   78899 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:46:03.231778   78899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:46:03.231823   78899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:46:03.231892   78899 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:46:03.490166   78899 provision.go:177] copyRemoteCerts
	I1002 20:46:03.490221   78899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:46:03.490259   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:03.508435   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:03.609601   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:46:03.609667   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:46:03.626240   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:46:03.626304   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 20:46:03.642410   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:46:03.642458   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:46:03.658782   78899 provision.go:87] duration metric: took 444.634386ms to configureAuth
	I1002 20:46:03.658808   78899 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:46:03.658975   78899 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:46:03.659073   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:03.676668   78899 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:03.676868   78899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 20:46:03.676886   78899 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:46:03.930147   78899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:46:03.930169   78899 machine.go:96] duration metric: took 4.208026772s to provisionDockerMachine
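The tee above writes a sysconfig drop-in that the crio unit reads on the restart issued in the same command. A one-line spot check from the host, assuming the ha-872795 container is still running; the expected value is reconstructed from the command output above:

	docker exec ha-872795 cat /etc/sysconfig/crio.minikube
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '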
	I1002 20:46:03.930182   78899 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:46:03.930195   78899 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:46:03.930249   78899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:46:03.930307   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:03.947258   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:04.047956   78899 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:46:04.051422   78899 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:46:04.051453   78899 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:46:04.051465   78899 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:46:04.051521   78899 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:46:04.051595   78899 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:46:04.051605   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:46:04.051733   78899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:46:04.059188   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:46:04.075417   78899 start.go:297] duration metric: took 145.220836ms for postStartSetup
	I1002 20:46:04.075487   78899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:46:04.075532   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:04.093129   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:04.191077   78899 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:46:04.195739   78899 fix.go:57] duration metric: took 4.749874368s for fixHost
	I1002 20:46:04.195760   78899 start.go:84] releasing machines lock for "ha-872795", held for 4.749931233s
	I1002 20:46:04.195825   78899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:46:04.212606   78899 ssh_runner.go:195] Run: cat /version.json
	I1002 20:46:04.212673   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:04.212711   78899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:46:04.212768   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:04.230369   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:04.230715   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:04.379868   78899 ssh_runner.go:195] Run: systemctl --version
	I1002 20:46:04.386052   78899 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:46:04.419376   78899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:46:04.424169   78899 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:46:04.424233   78899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:46:04.431914   78899 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
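The find above renames any bridge or podman CNI configs to *.mk_disabled so that kindnet (recommended earlier for this multinode profile) stays the only active CNI; here nothing matched. A sketch to list what, if anything, was disabled:

	docker exec ha-872795 sh -c 'ls /etc/cni/net.d/*.mk_disabled 2>/dev/null || echo "none disabled"'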
	I1002 20:46:04.431932   78899 start.go:496] detecting cgroup driver to use...
	I1002 20:46:04.431960   78899 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:46:04.432004   78899 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:46:04.445356   78899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:46:04.456824   78899 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:46:04.456874   78899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:46:04.470403   78899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:46:04.481638   78899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:46:04.557990   78899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:46:04.636555   78899 docker.go:234] disabling docker service ...
	I1002 20:46:04.636608   78899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:46:04.650153   78899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:46:04.662016   78899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:46:04.734613   78899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:46:04.811825   78899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:46:04.823641   78899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:46:04.837220   78899 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:46:04.837279   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.845762   78899 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:46:04.845809   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.854146   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.862344   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.870401   78899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:46:04.878640   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.886882   78899 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.894503   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.902512   78899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:46:04.909191   78899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:46:04.915764   78899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:46:04.993486   78899 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:46:05.096845   78899 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:46:05.096913   78899 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:46:05.100739   78899 start.go:564] Will wait 60s for crictl version
	I1002 20:46:05.100794   78899 ssh_runner.go:195] Run: which crictl
	I1002 20:46:05.104308   78899 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:46:05.127966   78899 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:46:05.128043   78899 ssh_runner.go:195] Run: crio --version
	I1002 20:46:05.154454   78899 ssh_runner.go:195] Run: crio --version
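Taken together, the sed edits at 20:46:04 pin the pause image, switch CRI-O to the systemd cgroup manager, move conmon into the pod cgroup, and open unprivileged ports. A sketch to confirm the resulting drop-in; the expected values are reconstructed from the logged commands, not captured from the file:

	docker exec ha-872795 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected: pause_image = "registry.k8s.io/pause:3.10.1"
	#           cgroup_manager = "systemd"
	#           conmon_cgroup = "pod"
	#           "net.ipv4.ip_unprivileged_port_start=0",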
	I1002 20:46:05.182372   78899 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:46:05.183558   78899 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:46:05.200765   78899 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:46:05.204765   78899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:46:05.214588   78899 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:46:05.214721   78899 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:46:05.214780   78899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:46:05.245534   78899 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:46:05.245552   78899 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:46:05.245593   78899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:46:05.270550   78899 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:46:05.270570   78899 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:46:05.270577   78899 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:46:05.270681   78899 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:46:05.270753   78899 ssh_runner.go:195] Run: crio config
	I1002 20:46:05.313363   78899 cni.go:84] Creating CNI manager for ""
	I1002 20:46:05.313383   78899 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:46:05.313397   78899 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:46:05.313416   78899 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:46:05.313519   78899 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
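The rendered kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new by the scp a few lines below, and it can be sanity-checked with kubeadm's own validator. A sketch, assuming the in-container binary path shown in this log:

	docker exec ha-872795 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new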
	I1002 20:46:05.313572   78899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:46:05.321352   78899 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:46:05.321406   78899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:46:05.328622   78899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:46:05.340520   78899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:46:05.352503   78899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:46:05.364256   78899 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:46:05.367691   78899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:46:05.376985   78899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:46:05.453441   78899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:46:05.477718   78899 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:46:05.477741   78899 certs.go:195] generating shared ca certs ...
	I1002 20:46:05.477762   78899 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:05.477898   78899 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:46:05.477934   78899 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:46:05.477943   78899 certs.go:257] generating profile certs ...
	I1002 20:46:05.478028   78899 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:46:05.478050   78899 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4
	I1002 20:46:05.478067   78899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.da7297d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 20:46:05.639131   78899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.da7297d4 ...
	I1002 20:46:05.639158   78899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.da7297d4: {Name:mkfc40b7884f53bead483594047f8801d6c65008 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:05.639360   78899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4 ...
	I1002 20:46:05.639377   78899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4: {Name:mkbc72faf4d67a50affdab4239091d17eab3b576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:05.639481   78899 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.da7297d4 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt
	I1002 20:46:05.639675   78899 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key
	I1002 20:46:05.639868   78899 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:46:05.639889   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:46:05.639909   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:46:05.639931   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:46:05.639955   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:46:05.639971   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:46:05.639988   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:46:05.640006   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:46:05.640023   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:46:05.640085   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:46:05.640129   78899 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:46:05.640142   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:46:05.640172   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:46:05.640204   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:46:05.640245   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:46:05.640297   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:46:05.640338   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:46:05.640356   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:46:05.640374   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:05.640909   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:46:05.658131   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:46:05.675243   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:46:05.691273   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:46:05.707405   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 20:46:05.723209   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:46:05.739016   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:46:05.755859   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:46:05.771787   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:46:05.788286   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:46:05.804179   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:46:05.820035   78899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:46:05.831579   78899 ssh_runner.go:195] Run: openssl version
	I1002 20:46:05.837409   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:46:05.845365   78899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:05.848827   78899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:05.848873   78899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:05.882807   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:46:05.890823   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:46:05.899057   78899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:46:05.902614   78899 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:46:05.902684   78899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:46:05.935800   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:46:05.943470   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:46:05.951918   78899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:46:05.955342   78899 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:46:05.955394   78899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:46:05.997712   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
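The hex names above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: a CA is located by hashing its subject name and opening <hash>.0 in the certs directory, which is why each cert is paired with an openssl x509 -hash call before its symlink is made. A sketch reproducing one link name inside the node:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"   # expected to resolve back to minikubeCA.pem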
	I1002 20:46:06.007162   78899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:46:06.010895   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:46:06.051524   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:46:06.085577   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:46:06.119097   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:46:06.153217   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:46:06.186423   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
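The -checkend 86400 flag makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), presumably so the restart path can decide whether certs need regenerating; all six checks above passed silently. A sketch of the same test made verbose:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo 'valid for >24h' || echo 'expiring soon'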
	I1002 20:46:06.220190   78899 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:46:06.220256   78899 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:46:06.220304   78899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:46:06.246811   78899 cri.go:89] found id: ""
	I1002 20:46:06.246881   78899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:46:06.254366   78899 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:46:06.254384   78899 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:46:06.254422   78899 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:46:06.262225   78899 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:46:06.262586   78899 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:46:06.262726   78899 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-9327/kubeconfig needs updating (will repair): [kubeconfig missing "ha-872795" cluster setting kubeconfig missing "ha-872795" context setting]
	I1002 20:46:06.263079   78899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:06.263592   78899 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:46:06.264072   78899 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:46:06.264087   78899 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:46:06.264091   78899 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:46:06.264094   78899 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:46:06.264100   78899 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:46:06.264140   78899 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:46:06.264397   78899 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:46:06.271713   78899 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:46:06.271741   78899 kubeadm.go:601] duration metric: took 17.352317ms to restartPrimaryControlPlane
	I1002 20:46:06.271749   78899 kubeadm.go:402] duration metric: took 51.569514ms to StartCluster
	I1002 20:46:06.271767   78899 settings.go:142] acquiring lock: {Name:mk7417292ad351472478c5266970b58ebdd4130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:06.271822   78899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:46:06.272244   78899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:06.272428   78899 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:46:06.272502   78899 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:46:06.272602   78899 addons.go:69] Setting storage-provisioner=true in profile "ha-872795"
	I1002 20:46:06.272624   78899 addons.go:238] Setting addon storage-provisioner=true in "ha-872795"
	I1002 20:46:06.272679   78899 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:46:06.272682   78899 addons.go:69] Setting default-storageclass=true in profile "ha-872795"
	I1002 20:46:06.272710   78899 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-872795"
	I1002 20:46:06.272711   78899 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:46:06.273007   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:46:06.273074   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:46:06.275849   78899 out.go:179] * Verifying Kubernetes components...
	I1002 20:46:06.277085   78899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:46:06.293700   78899 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:46:06.294029   78899 addons.go:238] Setting addon default-storageclass=true in "ha-872795"
	I1002 20:46:06.294066   78899 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:46:06.294514   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:46:06.296310   78899 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:46:06.297578   78899 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:46:06.297591   78899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:46:06.297638   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:06.315893   78899 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:46:06.315919   78899 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:46:06.315977   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:06.322842   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:06.338186   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:06.386402   78899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:46:06.398777   78899 node_ready.go:35] waiting up to 6m0s for node "ha-872795" to be "Ready" ...
	I1002 20:46:06.430020   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:46:06.444406   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:06.481919   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:06.481958   78899 retry.go:31] will retry after 309.807231ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:06.497604   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:06.497632   78899 retry.go:31] will retry after 244.884641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
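
Every apply in this run fails for the same root cause: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and the dial to [::1]:8443 is refused because the apiserver has not come back up after the restart. minikube's retry.go then re-runs the apply with a growing, jittered delay (roughly 0.3s up to 39s later in this log). A minimal stand-in for that retry loop, assuming a plain local kubectl (the real command runs over SSH with sudo and KUBECONFIG set; this is our sketch, not minikube's actual retry.go):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // retryApply re-runs `kubectl apply --force -f <manifest>` with a
    // doubling, jittered delay until it succeeds or the budget is spent,
    // mirroring the behaviour visible in the log above.
    func retryApply(manifest string, budget time.Duration) error {
    	deadline := time.Now().Add(budget)
    	delay := 300 * time.Millisecond
    	for {
    		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("giving up on %s: %v\n%s", manifest, err, out)
    		}
    		// Grow the delay and add jitter before the next attempt.
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
    		delay *= 2
    	}
    }

    func main() {
    	if err := retryApply("/etc/kubernetes/addons/storage-provisioner.yaml", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
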
	I1002 20:46:06.743097   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:46:06.792601   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:06.794112   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:06.794136   78899 retry.go:31] will retry after 533.883087ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:06.842590   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:06.842621   78899 retry.go:31] will retry after 410.666568ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.253624   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:07.305432   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.305458   78899 retry.go:31] will retry after 489.641892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.328610   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:07.380758   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.380795   78899 retry.go:31] will retry after 369.153465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.750784   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:46:07.795231   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:07.802320   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.802354   78899 retry.go:31] will retry after 900.902263ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:07.846519   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.846552   78899 retry.go:31] will retry after 825.480637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:08.400289   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
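
In parallel with the addon retries, node_ready.go polls the Node object on a ~2-2.5s cadence, and each GET fails the same way until the apiserver answers. A sketch of such a readiness poll with client-go, assuming the kubeconfig path from the log (function names are ours, not minikube's):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the Node's Ready condition until it is True or the
    // timeout expires, tolerating "connection refused" while the apiserver
    // restarts. Sketch only; node_ready.go is the real implementation.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second) // matches the ~2-2.5s cadence in the log
    	}
    	return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(waitNodeReady(cs, "ha-872795", 6*time.Minute))
    }
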
	I1002 20:46:08.672691   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:46:08.704184   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:08.723919   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:08.723951   78899 retry.go:31] will retry after 1.623242145s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:08.754902   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:08.754942   78899 retry.go:31] will retry after 1.534997391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:10.290627   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:10.340323   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:10.340352   78899 retry.go:31] will retry after 1.072500895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:10.347501   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:10.397032   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:10.397058   78899 retry.go:31] will retry after 2.562445815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:10.899389   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:11.413692   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:11.465155   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:11.465197   78899 retry.go:31] will retry after 2.545749407s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:12.900153   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:12.960290   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:13.011206   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:13.011233   78899 retry.go:31] will retry after 2.264218786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:14.011720   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:14.064198   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:14.064229   78899 retry.go:31] will retry after 5.430080707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:14.900209   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:15.275689   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:15.325885   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:15.325919   78899 retry.go:31] will retry after 5.718863405s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:17.399470   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:19.399809   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:19.495047   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:19.546169   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:19.546212   78899 retry.go:31] will retry after 6.349030782s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:21.045488   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:21.095479   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:21.095509   78899 retry.go:31] will retry after 4.412738231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:21.400327   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:23.899614   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:25.508453   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:25.560861   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:25.560888   78899 retry.go:31] will retry after 8.695034149s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:25.896450   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:25.947240   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:25.947277   78899 retry.go:31] will retry after 14.217722553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:26.399408   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:28.400215   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:30.900092   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:33.400118   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:34.256598   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:34.309569   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:34.309630   78899 retry.go:31] will retry after 19.451161912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:35.899352   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:37.899781   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:40.165763   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:40.217670   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:40.217704   78899 retry.go:31] will retry after 8.892100881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:40.399315   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:42.399564   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:44.399980   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:46.899303   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:48.899477   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:49.110844   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:49.161863   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:49.161890   78899 retry.go:31] will retry after 18.08446926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:51.399432   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:53.399603   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:53.761037   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:53.812412   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:53.812439   78899 retry.go:31] will retry after 25.479513407s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:55.899367   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:57.899529   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:00.399361   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:02.399525   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:04.899405   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:47:07.246772   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:47:07.299553   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:47:07.299581   78899 retry.go:31] will retry after 17.600869808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:07.400189   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:09.899776   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:11.900115   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:14.399576   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:16.399745   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:18.400228   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:47:19.293047   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:47:19.344597   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:47:19.344630   78899 retry.go:31] will retry after 39.025323659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:20.899449   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:22.899645   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:24.900270   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:47:24.901299   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:47:24.952227   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:47:24.952258   78899 retry.go:31] will retry after 34.385430665s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:27.400196   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:29.899432   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:31.900050   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:34.400107   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:36.900095   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:39.400043   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:41.400189   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:43.900178   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:46.399320   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:48.400172   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:50.900066   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:53.399988   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:55.400092   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:57.400223   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:47:58.370762   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:47:58.424525   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:58.424640   78899 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:47:59.338316   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:47:59.390212   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:59.390328   78899 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:47:59.392488   78899 out.go:179] * Enabled addons: 
	I1002 20:47:59.394002   78899 addons.go:514] duration metric: took 1m53.121505654s for enable addons: enabled=[]
	W1002 20:47:59.899586   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 104 near-identical node_ready.go:55 "will retry" lines elided: the same connection-refused poll against https://192.168.49.2:8443/api/v1/nodes/ha-872795, repeated roughly every 2.5s from 20:48:02 through 20:52:02 ...]
	W1002 20:52:04.899587   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:06.399229   78899 node_ready.go:38] duration metric: took 6m0.000412603s for node "ha-872795" to be "Ready" ...
	I1002 20:52:06.401641   78899 out.go:203] 
	W1002 20:52:06.403684   78899 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:52:06.403700   78899 out.go:285] * 
	* 
	W1002 20:52:06.405327   78899 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:52:06.406669   78899 out.go:203] 

                                                
                                                
** /stderr **
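Two details in the stderr above are worth unpacking. First, every `kubectl apply` for the addons fails before reaching the apiserver at all: client-side validation tries to download the OpenAPI schema from localhost:8443, where nothing is listening. The `--validate=false` that kubectl suggests would only skip that schema check, not make the apply succeed against a dead apiserver, which is why minikube's addon enabler (addons.go:461) logs "apply failed, will retry" instead. A minimal Go sketch of that retry-on-apply pattern, assuming kubectl on PATH; the manifest path, attempt count, and linear backoff are illustrative assumptions, not minikube's actual values:

// apply_retry.go: a sketch of the "apply failed, will retry" behavior seen
// above. Illustrative only: attempt count and backoff are assumptions.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry shells out to kubectl and retries on failure, since a
// connection-refused apiserver may simply not be up yet.
func applyWithRetry(manifest string, attempts int) error {
	var lastErr error
	for i := 1; i <= attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i, err, out)
		fmt.Println("apply failed, will retry:", lastErr)
		time.Sleep(time.Duration(i) * time.Second) // simple linear backoff
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3); err != nil {
		fmt.Println("giving up:", err)
	}
}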
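Second, the long run of node_ready.go:55 lines is minikube polling the node's Ready condition until the 6m0s wait budget runs out (node_ready.go:38 reports "took 6m0.000412603s"), after which it aborts with GUEST_START; exit status 80 is the exit code minikube uses for guest errors, and it is what the test harness sees below. A minimal sketch of the same Ready poll, assuming kubectl can reach the cluster; the node name, deadline, and ~2.5s cadence mirror this log but are hard-coded purely for illustration:

// ready_poll.go: a sketch of the node Ready wait seen above. Assumes kubectl
// is on PATH and configured for the cluster; the name and timings come from
// this log but are hard-coded here only for illustration.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// nodeReady asks the apiserver for the node's Ready condition via kubectl.
func nodeReady(node string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "node", node,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err // e.g. connection refused while the apiserver is down
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := nodeReady("ha-872795")
		if err != nil {
			fmt.Println("error getting node (will retry):", err)
		} else if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond) // roughly the cadence in the log
	}
	fmt.Println("WaitNodeCondition: deadline exceeded")
}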
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-872795 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 79098,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:45:59.488616861Z",
	            "FinishedAt": "2025-10-02T20:45:58.36276989Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fe270a11d29905a7aa21ceba3c673cad94096380a45185c01c96de7e6b75dbe7",
	            "SandboxKey": "/var/run/docker/netns/fe270a11d299",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:9d:1e:6a:fe:99",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "38ef7297bec941b34747e498d90575ca5f4bb864e58670d3487fa859f3f506b4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
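The NetworkSettings.Ports block in the inspect output above is how host-side tooling reaches services inside the container: the apiserver's 8443/tcp is published on 127.0.0.1:32791, SSH's 22/tcp on 127.0.0.1:32788, and so on (the in-cluster address 192.168.49.2:8443 from the stderr is a different path, on the docker network itself). A minimal Go sketch of reading one of these bindings out of `docker inspect` JSON, assuming the docker CLI is on PATH; the container name is the one from this report:

// port_lookup.go: a sketch of extracting a published host port from
// `docker inspect` JSON, as in the NetworkSettings.Ports block above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectInfo models just the fields we need from docker inspect output.
type inspectInfo struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

// hostPort returns the first host-side binding for a container port
// such as "8443/tcp".
func hostPort(container, containerPort string) (string, error) {
	out, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var info []inspectInfo // docker inspect emits a JSON array
	if err := json.Unmarshal(out, &info); err != nil {
		return "", err
	}
	if len(info) == 0 {
		return "", fmt.Errorf("no such container: %s", container)
	}
	bindings := info[0].NetworkSettings.Ports[containerPort]
	if len(bindings) == 0 {
		return "", fmt.Errorf("no binding for %s", containerPort)
	}
	return bindings[0].HostIp + ":" + bindings[0].HostPort, nil
}

func main() {
	addr, err := hostPort("ha-872795", "8443/tcp")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("apiserver published at", addr) // 127.0.0.1:32791 in this run
}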
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 2 (289.847813ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                      ARGS                                                       │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ ha-872795 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:34 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- rollout status deployment/busybox                                                          │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node add --alsologtostderr -v 5                                                                       │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node stop m02 --alsologtostderr -v 5                                                                  │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node start m02 --alsologtostderr -v 5                                                                 │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node list --alsologtostderr -v 5                                                                      │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ stop    │ ha-872795 stop --alsologtostderr -v 5                                                                           │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │ 02 Oct 25 20:45 UTC │
	│ start   │ ha-872795 start --wait true --alsologtostderr -v 5                                                              │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node list --alsologtostderr -v 5                                                                      │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:45:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:45:59.273214   78899 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:59.273492   78899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:59.273503   78899 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:59.273509   78899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:59.273755   78899 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:45:59.274203   78899 out.go:368] Setting JSON to false
	I1002 20:45:59.275100   78899 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5308,"bootTime":1759432651,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:45:59.275172   78899 start.go:140] virtualization: kvm guest
	I1002 20:45:59.277322   78899 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:45:59.278717   78899 notify.go:221] Checking for updates...
	I1002 20:45:59.278734   78899 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:45:59.280224   78899 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:45:59.281523   78899 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:45:59.282829   78899 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:45:59.283968   78899 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:45:59.285159   78899 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:45:59.286946   78899 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:59.287045   78899 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:45:59.312895   78899 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:45:59.312963   78899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:59.365695   78899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:45:59.355393625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:45:59.365846   78899 docker.go:319] overlay module found
	I1002 20:45:59.367547   78899 out.go:179] * Using the docker driver based on existing profile
	I1002 20:45:59.368669   78899 start.go:306] selected driver: docker
	I1002 20:45:59.368691   78899 start.go:936] validating driver "docker" against &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:45:59.368764   78899 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:45:59.368835   78899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:59.420192   78899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:45:59.410429763 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:45:59.420918   78899 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:45:59.420950   78899 cni.go:84] Creating CNI manager for ""
	I1002 20:45:59.420996   78899 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:45:59.421049   78899 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:45:59.422984   78899 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:45:59.424152   78899 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:45:59.425341   78899 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:45:59.426550   78899 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:45:59.426588   78899 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:45:59.426598   78899 cache.go:59] Caching tarball of preloaded images
	I1002 20:45:59.426671   78899 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:45:59.426737   78899 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:45:59.426752   78899 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:45:59.426839   78899 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:45:59.445659   78899 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:45:59.445684   78899 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:45:59.445705   78899 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:45:59.445727   78899 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:45:59.445817   78899 start.go:365] duration metric: took 46.032µs to acquireMachinesLock for "ha-872795"
	I1002 20:45:59.445849   78899 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:45:59.445859   78899 fix.go:55] fixHost starting: 
	I1002 20:45:59.446055   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:59.462065   78899 fix.go:113] recreateIfNeeded on ha-872795: state=Stopped err=<nil>
	W1002 20:45:59.462095   78899 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:45:59.463993   78899 out.go:252] * Restarting existing docker container for "ha-872795" ...
	I1002 20:45:59.464064   78899 cli_runner.go:164] Run: docker start ha-872795
	I1002 20:45:59.685107   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:59.703014   78899 kic.go:430] container "ha-872795" state is running.
	I1002 20:45:59.703476   78899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:45:59.721917   78899 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:45:59.722128   78899 machine.go:93] provisionDockerMachine start ...
	I1002 20:45:59.722199   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:45:59.740462   78899 main.go:141] libmachine: Using SSH client type: native
	I1002 20:45:59.740703   78899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 20:45:59.740719   78899 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:45:59.741377   78899 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45756->127.0.0.1:32788: read: connection reset by peer
	I1002 20:46:02.885620   78899 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
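The dial above targets the container's published SSH port, not the container IP: minikube resolves whichever host port Docker mapped to 22/tcp and connects as the docker user with the profile's generated key. A shell sketch of the same lookup, using the port (32788) and key path from this particular run (both differ between runs):

  HOSTPORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-872795)
  ssh -i /home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa \
      -p "$HOSTPORT" docker@127.0.0.1 hostname   # prints: ha-872795

The initial "connection reset by peer" is expected here: the container was started moments earlier and sshd is not yet accepting connections, so the client simply retries until the hostname command succeeds about three seconds later.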
	I1002 20:46:02.885643   78899 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:46:02.885724   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:02.903157   78899 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:02.903362   78899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 20:46:02.903374   78899 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:46:03.053956   78899 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:46:03.054038   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:03.071746   78899 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:03.071971   78899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 20:46:03.071994   78899 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:46:03.214048   78899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:46:03.214082   78899 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:46:03.214121   78899 ubuntu.go:190] setting up certificates
	I1002 20:46:03.214132   78899 provision.go:84] configureAuth start
	I1002 20:46:03.214197   78899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:46:03.231298   78899 provision.go:143] copyHostCerts
	I1002 20:46:03.231330   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:46:03.231366   78899 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:46:03.231391   78899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:46:03.231472   78899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:46:03.231573   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:46:03.231600   78899 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:46:03.231610   78899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:46:03.231673   78899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:46:03.231747   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:46:03.231769   78899 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:46:03.231778   78899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:46:03.231823   78899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:46:03.231892   78899 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:46:03.490166   78899 provision.go:177] copyRemoteCerts
	I1002 20:46:03.490221   78899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:46:03.490259   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:03.508435   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:03.609601   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:46:03.609667   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:46:03.626240   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:46:03.626304   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 20:46:03.642410   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:46:03.642458   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:46:03.658782   78899 provision.go:87] duration metric: took 444.634386ms to configureAuth
	I1002 20:46:03.658808   78899 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:46:03.658975   78899 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:46:03.659073   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:03.676668   78899 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:03.676868   78899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 20:46:03.676886   78899 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:46:03.930147   78899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
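The command above drops a systemd environment file that passes --insecure-registry for the service CIDR to CRI-O, then restarts the runtime so the option takes effect. To confirm the drop-in landed, it can be read back through the node's shell (a sketch using this run's profile name and the test binary from this report):

  out/minikube-linux-amd64 -p ha-872795 ssh -- cat /etc/sysconfig/crio.minikube
  # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '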
	I1002 20:46:03.930169   78899 machine.go:96] duration metric: took 4.208026772s to provisionDockerMachine
	I1002 20:46:03.930182   78899 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:46:03.930195   78899 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:46:03.930249   78899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:46:03.930307   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:03.947258   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:04.047956   78899 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:46:04.051422   78899 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:46:04.051453   78899 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:46:04.051465   78899 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:46:04.051521   78899 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:46:04.051595   78899 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:46:04.051605   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:46:04.051733   78899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:46:04.059188   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:46:04.075417   78899 start.go:297] duration metric: took 145.220836ms for postStartSetup
	I1002 20:46:04.075487   78899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:46:04.075532   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:04.093129   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:04.191077   78899 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:46:04.195739   78899 fix.go:57] duration metric: took 4.749874368s for fixHost
	I1002 20:46:04.195760   78899 start.go:84] releasing machines lock for "ha-872795", held for 4.749931233s
	I1002 20:46:04.195825   78899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:46:04.212606   78899 ssh_runner.go:195] Run: cat /version.json
	I1002 20:46:04.212673   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:04.212711   78899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:46:04.212768   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:04.230369   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:04.230715   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:04.379868   78899 ssh_runner.go:195] Run: systemctl --version
	I1002 20:46:04.386052   78899 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:46:04.419376   78899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:46:04.424169   78899 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:46:04.424233   78899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:46:04.431914   78899 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:46:04.431932   78899 start.go:496] detecting cgroup driver to use...
	I1002 20:46:04.431960   78899 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:46:04.432004   78899 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:46:04.445356   78899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:46:04.456824   78899 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:46:04.456874   78899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:46:04.470403   78899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:46:04.481638   78899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:46:04.557990   78899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:46:04.636555   78899 docker.go:234] disabling docker service ...
	I1002 20:46:04.636608   78899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:46:04.650153   78899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:46:04.662016   78899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:46:04.734613   78899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:46:04.811825   78899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:46:04.823641   78899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:46:04.837220   78899 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:46:04.837279   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.845762   78899 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:46:04.845809   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.854146   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.862344   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.870401   78899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:46:04.878640   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.886882   78899 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.894503   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
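The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to systemd, forces conmon into the pod cgroup, and seeds default_sysctls so unprivileged ports start at 0. A quick check of the result on the node (a sketch; the file's full contents are not captured in this log):

  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' \
      /etc/crio/crio.conf.d/02-crio.conf
  # expected, per the commands above:
  #   pause_image = "registry.k8s.io/pause:3.10.1"
  #   cgroup_manager = "systemd"
  #   conmon_cgroup = "pod"
  #     "net.ipv4.ip_unprivileged_port_start=0",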
	I1002 20:46:04.902512   78899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:46:04.909191   78899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:46:04.915764   78899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:46:04.993486   78899 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:46:05.096845   78899 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:46:05.096913   78899 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:46:05.100739   78899 start.go:564] Will wait 60s for crictl version
	I1002 20:46:05.100794   78899 ssh_runner.go:195] Run: which crictl
	I1002 20:46:05.104308   78899 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:46:05.127966   78899 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
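	This probe works without a --runtime-endpoint flag because the /etc/crictl.yaml written above pins crictl to the CRI-O socket; the same applies to the `crictl images --output json` calls further down. Equivalent manual invocations:

  sudo crictl version                 # reads unix:///var/run/crio/crio.sock from /etc/crictl.yaml
  sudo crictl images --output json    # the listing minikube uses to verify the preload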
	I1002 20:46:05.128043   78899 ssh_runner.go:195] Run: crio --version
	I1002 20:46:05.154454   78899 ssh_runner.go:195] Run: crio --version
	I1002 20:46:05.182372   78899 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:46:05.183558   78899 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:46:05.200765   78899 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:46:05.204765   78899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:46:05.214588   78899 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:46:05.214721   78899 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:46:05.214780   78899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:46:05.245534   78899 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:46:05.245552   78899 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:46:05.245593   78899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:46:05.270550   78899 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:46:05.270570   78899 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:46:05.270577   78899 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:46:05.270681   78899 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
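The unit fragment above is not applied verbatim; it is rendered into the two files scp'd a few lines below (/lib/systemd/system/kubelet.service and the drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). On the node, the effective command line can be confirmed with:

  sudo systemctl cat kubelet    # the unit plus the 10-kubeadm.conf drop-in
  ps -o args= -C kubelet        # the flags actually running, once kubelet is up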
	I1002 20:46:05.270753   78899 ssh_runner.go:195] Run: crio config
	I1002 20:46:05.313363   78899 cni.go:84] Creating CNI manager for ""
	I1002 20:46:05.313383   78899 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:46:05.313397   78899 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:46:05.313416   78899 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:46:05.313519   78899 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
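The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are staged as /var/tmp/minikube/kubeadm.yaml.new, 2205 bytes per the scp below. Outside of minikube, such a config can be sanity-checked without touching the cluster, for instance:

  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run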
	I1002 20:46:05.313572   78899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:46:05.321352   78899 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:46:05.321406   78899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:46:05.328622   78899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:46:05.340520   78899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:46:05.352503   78899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:46:05.364256   78899 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:46:05.367691   78899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:46:05.376985   78899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:46:05.453441   78899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:46:05.477718   78899 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:46:05.477741   78899 certs.go:195] generating shared ca certs ...
	I1002 20:46:05.477762   78899 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:05.477898   78899 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:46:05.477934   78899 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:46:05.477943   78899 certs.go:257] generating profile certs ...
	I1002 20:46:05.478028   78899 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:46:05.478050   78899 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4
	I1002 20:46:05.478067   78899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.da7297d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 20:46:05.639131   78899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.da7297d4 ...
	I1002 20:46:05.639158   78899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.da7297d4: {Name:mkfc40b7884f53bead483594047f8801d6c65008 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:05.639360   78899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4 ...
	I1002 20:46:05.639377   78899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4: {Name:mkbc72faf4d67a50affdab4239091d17eab3b576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:05.639481   78899 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.da7297d4 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt
	I1002 20:46:05.639675   78899 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key
	I1002 20:46:05.639868   78899 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:46:05.639889   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:46:05.639909   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:46:05.639931   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:46:05.639955   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:46:05.639971   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:46:05.639988   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:46:05.640006   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:46:05.640023   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:46:05.640085   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:46:05.640129   78899 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:46:05.640142   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:46:05.640172   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:46:05.640204   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:46:05.640245   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:46:05.640297   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:46:05.640338   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:46:05.640356   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:46:05.640374   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:05.640909   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:46:05.658131   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:46:05.675243   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:46:05.691273   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:46:05.707405   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 20:46:05.723209   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:46:05.739016   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:46:05.755859   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:46:05.771787   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:46:05.788286   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:46:05.804179   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:46:05.820035   78899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:46:05.831579   78899 ssh_runner.go:195] Run: openssl version
	I1002 20:46:05.837409   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:46:05.845365   78899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:05.848827   78899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:05.848873   78899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:05.882807   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:46:05.890823   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:46:05.899057   78899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:46:05.902614   78899 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:46:05.902684   78899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:46:05.935800   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:46:05.943470   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:46:05.951918   78899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:46:05.955342   78899 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:46:05.955394   78899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:46:05.997712   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
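The three test-and-link commands above (this one and the two before it) rebuild OpenSSL's CApath layout by hand: each trusted certificate must be reachable as <subject-hash>.0 under /etc/ssl/certs, with the hash coming from `openssl x509 -hash`. Done explicitly for the minikube CA, whose hash b5213941 matches the first symlink created above:

  H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$H.0"   # H=b5213941 on this run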
	I1002 20:46:06.007162   78899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:46:06.010895   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:46:06.051524   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:46:06.085577   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:46:06.119097   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:46:06.153217   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:46:06.186423   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
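Each `-checkend 86400` run above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; the command exits non-zero if not, which is how minikube decides whether a cert needs regenerating. Standalone:

  if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "valid for at least another 24h"
  else
      echo "expires within 24h - regenerate"
  fi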
	I1002 20:46:06.220190   78899 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:46:06.220256   78899 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:46:06.220304   78899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:46:06.246811   78899 cri.go:89] found id: ""
	I1002 20:46:06.246881   78899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:46:06.254366   78899 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:46:06.254384   78899 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:46:06.254422   78899 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:46:06.262225   78899 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:46:06.262586   78899 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:46:06.262726   78899 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-9327/kubeconfig needs updating (will repair): [kubeconfig missing "ha-872795" cluster setting kubeconfig missing "ha-872795" context setting]
	I1002 20:46:06.263079   78899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:06.263592   78899 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:46:06.264072   78899 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:46:06.264087   78899 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:46:06.264091   78899 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:46:06.264094   78899 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:46:06.264100   78899 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:46:06.264140   78899 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:46:06.264397   78899 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:46:06.271713   78899 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:46:06.271741   78899 kubeadm.go:601] duration metric: took 17.352317ms to restartPrimaryControlPlane
	I1002 20:46:06.271749   78899 kubeadm.go:402] duration metric: took 51.569514ms to StartCluster
	I1002 20:46:06.271767   78899 settings.go:142] acquiring lock: {Name:mk7417292ad351472478c5266970b58ebdd4130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:06.271822   78899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:46:06.272244   78899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:06.272428   78899 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:46:06.272502   78899 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:46:06.272602   78899 addons.go:69] Setting storage-provisioner=true in profile "ha-872795"
	I1002 20:46:06.272624   78899 addons.go:238] Setting addon storage-provisioner=true in "ha-872795"
	I1002 20:46:06.272679   78899 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:46:06.272682   78899 addons.go:69] Setting default-storageclass=true in profile "ha-872795"
	I1002 20:46:06.272710   78899 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-872795"
	I1002 20:46:06.272711   78899 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:46:06.273007   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:46:06.273074   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:46:06.275849   78899 out.go:179] * Verifying Kubernetes components...
	I1002 20:46:06.277085   78899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:46:06.293700   78899 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:46:06.294029   78899 addons.go:238] Setting addon default-storageclass=true in "ha-872795"
	I1002 20:46:06.294066   78899 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:46:06.294514   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:46:06.296310   78899 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:46:06.297578   78899 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:46:06.297591   78899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:46:06.297638   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:06.315893   78899 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:46:06.315919   78899 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:46:06.315977   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:06.322842   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:06.338186   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:06.386402   78899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:46:06.398777   78899 node_ready.go:35] waiting up to 6m0s for node "ha-872795" to be "Ready" ...
	I1002 20:46:06.430020   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:46:06.444406   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
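Each ssh_runner apply above is a plain kubectl invocation with KUBECONFIG set on the sudo command line. Run locally, that command shape looks like this (a sketch; the paths are the ones from this log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same argv shape as the log lines above: sudo accepts the leading
	// VAR=value assignment before the command to run.
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err) // the path addons.go:461 reports below
	}
}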
	W1002 20:46:06.481919   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:06.481958   78899 retry.go:31] will retry after 309.807231ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:06.497604   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:06.497632   78899 retry.go:31] will retry after 244.884641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
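The retry.go:31 delays above and below (309ms, 244ms, 533ms, ... growing to tens of seconds) are jittered exponential backoff: each failed apply is logged, swallowed, and re-attempted. A sketch of an equivalent wrapper using apimachinery's wait.ExponentialBackoff (retryApply, applyOnce, and the backoff constants are illustrative, not minikube's exact values):

package addons

import (
	"log"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// retryApply re-runs applyOnce with jittered, roughly-doubling delays
// until it succeeds or the step budget is exhausted. Errors are returned
// as "not done" so the backoff continues, matching the paired
// "apply failed, will retry" / "will retry after ..." lines in this log.
func retryApply(applyOnce func() error) error {
	backoff := wait.Backoff{
		Duration: 300 * time.Millisecond, // first delays in this log sit around 250-530ms
		Factor:   1.8,
		Jitter:   0.5,
		Steps:    13, // enough steps to reach the ~39s delay seen near the end
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := applyOnce(); err != nil {
			log.Printf("apply failed, will retry: %v", err)
			return false, nil
		}
		return true, nil
	})
}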
	I1002 20:46:06.743097   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:46:06.792601   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:06.794112   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:06.794136   78899 retry.go:31] will retry after 533.883087ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:06.842590   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:06.842621   78899 retry.go:31] will retry after 410.666568ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.253624   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:07.305432   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.305458   78899 retry.go:31] will retry after 489.641892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.328610   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:07.380758   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.380795   78899 retry.go:31] will retry after 369.153465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.750784   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:46:07.795231   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:07.802320   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.802354   78899 retry.go:31] will retry after 900.902263ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:07.846519   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.846552   78899 retry.go:31] will retry after 825.480637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:08.400289   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:08.672691   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:46:08.704184   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:08.723919   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:08.723951   78899 retry.go:31] will retry after 1.623242145s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:08.754902   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:08.754942   78899 retry.go:31] will retry after 1.534997391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:10.290627   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:10.340323   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:10.340352   78899 retry.go:31] will retry after 1.072500895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:10.347501   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:10.397032   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:10.397058   78899 retry.go:31] will retry after 2.562445815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:10.899389   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:11.413692   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:11.465155   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:11.465197   78899 retry.go:31] will retry after 2.545749407s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:12.900153   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:12.960290   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:13.011206   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:13.011233   78899 retry.go:31] will retry after 2.264218786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:14.011720   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:14.064198   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:14.064229   78899 retry.go:31] will retry after 5.430080707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:14.900209   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:15.275689   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:15.325885   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:15.325919   78899 retry.go:31] will retry after 5.718863405s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:17.399470   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:19.399809   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:19.495047   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:19.546169   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:19.546212   78899 retry.go:31] will retry after 6.349030782s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:21.045488   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:21.095479   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:21.095509   78899 retry.go:31] will retry after 4.412738231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:21.400327   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:23.899614   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:25.508453   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:25.560861   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:25.560888   78899 retry.go:31] will retry after 8.695034149s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:25.896450   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:25.947240   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:25.947277   78899 retry.go:31] will retry after 14.217722553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:26.399408   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:28.400215   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:30.900092   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:33.400118   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:34.256598   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:34.309569   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:34.309630   78899 retry.go:31] will retry after 19.451161912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:35.899352   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:37.899781   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:40.165763   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:40.217670   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:40.217704   78899 retry.go:31] will retry after 8.892100881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:40.399315   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:42.399564   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:44.399980   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:46.899303   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:48.899477   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:49.110844   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:49.161863   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:49.161890   78899 retry.go:31] will retry after 18.08446926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:51.399432   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:53.399603   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:53.761037   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:53.812412   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:53.812439   78899 retry.go:31] will retry after 25.479513407s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:55.899367   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:57.899529   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:00.399361   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:02.399525   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:04.899405   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:47:07.246772   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:47:07.299553   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:47:07.299581   78899 retry.go:31] will retry after 17.600869808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:07.400189   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:09.899776   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:11.900115   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:14.399576   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:16.399745   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:18.400228   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:47:19.293047   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:47:19.344597   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:47:19.344630   78899 retry.go:31] will retry after 39.025323659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:20.899449   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:22.899645   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:24.900270   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:47:24.901299   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:47:24.952227   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:47:24.952258   78899 retry.go:31] will retry after 34.385430665s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:27.400196   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	[13 further identical node_ready.go:55 warnings, 20:47:29.899 through 20:47:57.400, elided]
	I1002 20:47:58.370762   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:47:58.424525   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:58.424640   78899 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:47:59.338316   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:47:59.390212   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:59.390328   78899 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:47:59.392488   78899 out.go:179] * Enabled addons: 
	I1002 20:47:59.394002   78899 addons.go:514] duration metric: took 1m53.121505654s for enable addons: enabled=[]
	W1002 20:47:59.899586   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	[42 further identical node_ready.go:55 warnings, 20:48:02.400 through 20:49:37.400, elided]
	W1002 20:49:39.900090   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:42.400075   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:44.900256   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:47.400250   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:49.400299   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:51.900262   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:54.399328   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:56.399370   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:58.400237   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:00.900256   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:03.400303   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:05.900106   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:08.399977   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:10.400162   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:12.900033   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:14.900100   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:16.900212   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:19.400248   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:21.900062   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:24.400122   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:26.400193   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:28.900088   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:30.900230   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:33.400099   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:35.900072   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:37.900203   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:39.900270   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:42.400176   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:44.400277   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:46.900306   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:49.399270   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:51.400304   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:53.900287   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:56.400212   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:58.900133   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:00.900311   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:03.400234   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:05.900198   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:08.400064   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:10.900046   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:13.399369   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:15.400276   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:17.900327   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:20.400235   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:22.900085   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:24.900199   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:27.400267   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:29.899494   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:31.900351   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:34.399288   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:36.399420   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:38.400249   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:40.900211   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:43.400231   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:45.400275   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:47.900301   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:50.400321   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:52.900338   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:55.400304   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:57.899398   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:00.400254   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:02.400323   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:04.899587   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:06.399229   78899 node_ready.go:38] duration metric: took 6m0.000412603s for node "ha-872795" to be "Ready" ...
	I1002 20:52:06.401641   78899 out.go:203] 
	W1002 20:52:06.403684   78899 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:52:06.403700   78899 out.go:285] * 
	W1002 20:52:06.405327   78899 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:52:06.406669   78899 out.go:203] 
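For context on the GUEST_START exit above: the 6m node-Ready wait simply expired after minutes of connection-refused retries against 192.168.49.2:8443. A minimal shell sketch of an equivalent wait loop (not minikube's actual node_ready.go implementation; node name, cadence, and deadline taken from this run):

	# Sketch only: poll the Ready condition on the same ~2-2.5s cadence with a
	# 6-minute deadline, as the node_ready.go loop above does. Assumes kubectl
	# is on PATH and pointed at the cluster; here every poll would be refused.
	deadline=$((SECONDS + 360))
	until kubectl get node ha-872795 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' \
	      2>/dev/null | grep -q True; do
	  (( SECONDS >= deadline )) && { echo "timed out waiting for node Ready" >&2; exit 1; }
	  sleep 2
	done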
	
	
	==> CRI-O <==
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.569487015Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.569926845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.571761965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.572366383Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.586281422Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=6f017519-ce8a-4a31-bf97-0371197b41aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.58778563Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=fd99dcfb-3cc6-4bd5-bf8e-39e478c8802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.587830637Z" level=info msg="createCtr: deleting container ID b95f53b763239b3d38e09ee2814bcfb5b14cb72df331e0aeeb76a6ecc0c072d3 from idIndex" id=6f017519-ce8a-4a31-bf97-0371197b41aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.58786575Z" level=info msg="createCtr: removing container b95f53b763239b3d38e09ee2814bcfb5b14cb72df331e0aeeb76a6ecc0c072d3" id=6f017519-ce8a-4a31-bf97-0371197b41aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.587893875Z" level=info msg="createCtr: deleting container b95f53b763239b3d38e09ee2814bcfb5b14cb72df331e0aeeb76a6ecc0c072d3 from storage" id=6f017519-ce8a-4a31-bf97-0371197b41aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.589160977Z" level=info msg="createCtr: deleting container ID 926e29dc3fb1b02ca77bc8caca3996115624a20daf2dc23467eb8a47158b2cc5 from idIndex" id=fd99dcfb-3cc6-4bd5-bf8e-39e478c8802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.589189409Z" level=info msg="createCtr: removing container 926e29dc3fb1b02ca77bc8caca3996115624a20daf2dc23467eb8a47158b2cc5" id=fd99dcfb-3cc6-4bd5-bf8e-39e478c8802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.589213497Z" level=info msg="createCtr: deleting container 926e29dc3fb1b02ca77bc8caca3996115624a20daf2dc23467eb8a47158b2cc5 from storage" id=fd99dcfb-3cc6-4bd5-bf8e-39e478c8802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.591182504Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-872795_kube-system_9558d13dd1c45ecd0f3d491377941404_0" id=6f017519-ce8a-4a31-bf97-0371197b41aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.591451342Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=fd99dcfb-3cc6-4bd5-bf8e-39e478c8802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.562898181Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=3c4dd53b-4aa7-4821-994f-e95dd207f1a1 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.563898751Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=dfc42a93-e5ff-4b26-8da0-1022755692fb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.564860957Z" level=info msg="Creating container: kube-system/etcd-ha-872795/etcd" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.565165627Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.569872373Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.570450967Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.584481668Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.585933252Z" level=info msg="createCtr: deleting container ID fbd54bc8ecff9d6e26b5b1c34068d48e0a20cc244639b7f0fabd5e26b979fe1a from idIndex" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.58596738Z" level=info msg="createCtr: removing container fbd54bc8ecff9d6e26b5b1c34068d48e0a20cc244639b7f0fabd5e26b979fe1a" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.585996491Z" level=info msg="createCtr: deleting container fbd54bc8ecff9d6e26b5b1c34068d48e0a20cc244639b7f0fabd5e26b979fe1a from storage" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.588421447Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-872795_kube-system_d62260a65d00c99f27090ce6484101a9_0" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
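Editor's note on the recurring "cannot open sd-bus: No such file or directory": CRI-O (via its OCI runtime) reports this when it is configured to drive cgroups through systemd but no systemd D-Bus socket is reachable inside the node, so every CreateContainer call for etcd, kube-scheduler, and kube-controller-manager fails before the container starts. A hedged way to check the configured cgroup manager from the host (assumes the crio binary is available inside the node image; values shown are illustrative, not confirmed from this run):

	docker exec ha-872795 sh -c 'crio config 2>/dev/null | grep -E "cgroup_manager|conmon_cgroup"'
	#   cgroup_manager = "systemd"        <- needs a reachable systemd D-Bus socket
	#   conmon_cgroup = "system.slice"
	# note: switching to cgroup_manager = "cgroupfs" also requires conmon_cgroup = "pod"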
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:52:07.341664    2022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:52:07.342138    2022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:52:07.343732    2022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:52:07.344180    2022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:52:07.345706    2022 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
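The describe-nodes failure is the same root cause visible throughout this log: nothing is serving on port 8443. A quick hedged triage from the host (assumes ss, crictl, and curl exist in the node image):

	docker exec ha-872795 sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	docker exec ha-872795 sudo crictl ps -a --name kube-apiserver   # any apiserver container at all?
	docker exec ha-872795 curl -ksS https://localhost:8443/readyz   # refused while the apiserver is down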
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:52:07 up  1:34,  0 user,  load average: 0.11, 0.17, 0.18
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:51:57 ha-872795 kubelet[676]: E1002 20:51:57.591583     676 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:51:57 ha-872795 kubelet[676]:         container kube-scheduler start failed in pod kube-scheduler-ha-872795_kube-system(9558d13dd1c45ecd0f3d491377941404): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:51:57 ha-872795 kubelet[676]:  > logger="UnhandledError"
	Oct 02 20:51:57 ha-872795 kubelet[676]: E1002 20:51:57.591713     676 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:51:57 ha-872795 kubelet[676]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:51:57 ha-872795 kubelet[676]:  > podSandboxID="1b4ad0dc47b9a4985d33b6746de5bf4b721859db36b8536da8cce1580502cea3"
	Oct 02 20:51:57 ha-872795 kubelet[676]: E1002 20:51:57.591738     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-872795" podUID="9558d13dd1c45ecd0f3d491377941404"
	Oct 02 20:51:57 ha-872795 kubelet[676]: E1002 20:51:57.591789     676 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:51:57 ha-872795 kubelet[676]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-872795_kube-system(bb93bd54c951d044e2ddbaf0dd48a41c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:51:57 ha-872795 kubelet[676]:  > logger="UnhandledError"
	Oct 02 20:51:57 ha-872795 kubelet[676]: E1002 20:51:57.592962     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-872795" podUID="bb93bd54c951d044e2ddbaf0dd48a41c"
	Oct 02 20:51:58 ha-872795 kubelet[676]: E1002 20:51:58.776093     676 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 02 20:52:01 ha-872795 kubelet[676]: E1002 20:52:01.202199     676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:52:01 ha-872795 kubelet[676]: I1002 20:52:01.362248     676 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:52:01 ha-872795 kubelet[676]: E1002 20:52:01.362625     676 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:52:05 ha-872795 kubelet[676]: E1002 20:52:05.252182     676 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-872795.186ac781fd00879a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-872795,UID:ha-872795,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-872795 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-872795,},FirstTimestamp:2025-10-02 20:46:05.55097897 +0000 UTC m=+0.071290369,LastTimestamp:2025-10-02 20:46:05.55097897 +0000 UTC m=+0.071290369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-872795,}"
	Oct 02 20:52:05 ha-872795 kubelet[676]: E1002 20:52:05.578096     676 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-872795\" not found"
	Oct 02 20:52:06 ha-872795 kubelet[676]: E1002 20:52:06.562434     676 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:52:06 ha-872795 kubelet[676]: E1002 20:52:06.588743     676 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:52:06 ha-872795 kubelet[676]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:52:06 ha-872795 kubelet[676]:  > podSandboxID="b2335ec5342be7f98c7e4a4b5912aa55d23990fa851e2d58c3d4a39d85708c8d"
	Oct 02 20:52:06 ha-872795 kubelet[676]: E1002 20:52:06.588860     676 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:52:06 ha-872795 kubelet[676]:         container etcd start failed in pod etcd-ha-872795_kube-system(d62260a65d00c99f27090ce6484101a9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:52:06 ha-872795 kubelet[676]:  > logger="UnhandledError"
	Oct 02 20:52:06 ha-872795 kubelet[676]: E1002 20:52:06.588900     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-872795" podUID="d62260a65d00c99f27090ce6484101a9"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 2 (287.946262ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (369.74s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (1.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 node delete m03 --alsologtostderr -v 5: exit status 103 (240.772889ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-872795 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-872795"

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 20:52:07.766839   83000 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:52:07.767113   83000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:07.767124   83000 out.go:374] Setting ErrFile to fd 2...
	I1002 20:52:07.767127   83000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:07.767331   83000 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:52:07.767603   83000 mustload.go:65] Loading cluster: ha-872795
	I1002 20:52:07.767915   83000 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:07.768271   83000 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:07.785459   83000 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:52:07.785766   83000 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:07.837991   83000 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:52:07.82800441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:52:07.838124   83000 api_server.go:166] Checking apiserver status ...
	I1002 20:52:07.838172   83000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:52:07.838227   83000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:07.855148   83000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	W1002 20:52:07.957467   83000 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:52:07.959634   83000 out.go:179] * The control-plane node ha-872795 apiserver is not running: (state=Stopped)
	I1002 20:52:07.961086   83000 out.go:179]   To start a cluster, run: "minikube start -p ha-872795"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-linux-amd64 -p ha-872795 node delete m03 --alsologtostderr -v 5": exit status 103
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5: exit status 2 (283.605415ms)

                                                
                                                
-- stdout --
	ha-872795
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 20:52:08.008485   83095 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:52:08.008759   83095 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:08.008769   83095 out.go:374] Setting ErrFile to fd 2...
	I1002 20:52:08.008774   83095 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:08.008987   83095 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:52:08.009149   83095 out.go:368] Setting JSON to false
	I1002 20:52:08.009171   83095 mustload.go:65] Loading cluster: ha-872795
	I1002 20:52:08.009281   83095 notify.go:221] Checking for updates...
	I1002 20:52:08.009475   83095 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:08.009487   83095 status.go:174] checking status of ha-872795 ...
	I1002 20:52:08.009961   83095 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:08.029769   83095 status.go:371] ha-872795 host status = "Running" (err=<nil>)
	I1002 20:52:08.029796   83095 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:52:08.030029   83095 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:52:08.047311   83095 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:52:08.047541   83095 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:52:08.047587   83095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:08.065594   83095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:08.163619   83095 ssh_runner.go:195] Run: systemctl --version
	I1002 20:52:08.170056   83095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:52:08.181353   83095 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:08.234698   83095 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:52:08.223429684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:52:08.235200   83095 kubeconfig.go:125] found "ha-872795" server: "https://192.168.49.2:8443"
	I1002 20:52:08.235230   83095 api_server.go:166] Checking apiserver status ...
	I1002 20:52:08.235259   83095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 20:52:08.245621   83095 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:52:08.245642   83095 status.go:463] ha-872795 apiserver status = Running (err=<nil>)
	I1002 20:52:08.245674   83095 status.go:176] ha-872795 status: &{Name:ha-872795 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5" : exit status 2
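The "apiserver: Stopped" status above is derived from the pgrep probe visible in the stderr log (api_server.go:166/170); it can be reproduced by hand with the same command minikube ran inside the node:

	docker exec ha-872795 sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
	  || echo "no kube-apiserver process -> status reports apiserver: Stopped"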
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 79098,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:45:59.488616861Z",
	            "FinishedAt": "2025-10-02T20:45:58.36276989Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fe270a11d29905a7aa21ceba3c673cad94096380a45185c01c96de7e6b75dbe7",
	            "SandboxKey": "/var/run/docker/netns/fe270a11d299",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:9d:1e:6a:fe:99",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "38ef7297bec941b34747e498d90575ca5f4bb864e58670d3487fa859f3f506b4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
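
The inspect dump above shows how Docker publishes the node's service ports on loopback (22/tcp at 127.0.0.1:32788, 8443/tcp at 127.0.0.1:32791, and so on). A minimal Go sketch, illustrative rather than minikube's own code, that recovers the same 22/tcp binding the harness later reads with the template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`:

// Sketch: decode `docker container inspect` output and pull out the
// host-side SSH binding. Struct fields cover only what this example needs.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "container", "inspect", "ha-872795").Output()
	if err != nil {
		panic(err)
	}
	// docker inspect always prints a JSON array, even for a single container.
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	// Per the dump above, 22/tcp maps to 127.0.0.1:32788.
	binding := containers[0].NetworkSettings.Ports["22/tcp"][0]
	fmt.Printf("ssh reachable at %s:%s\n", binding.HostIp, binding.HostPort)
}
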
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 2 (275.779499ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-872795 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml            │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- rollout status deployment/busybox                      │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node add --alsologtostderr -v 5                                   │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node stop m02 --alsologtostderr -v 5                              │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node start m02 --alsologtostderr -v 5                             │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node list --alsologtostderr -v 5                                  │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ stop    │ ha-872795 stop --alsologtostderr -v 5                                       │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │ 02 Oct 25 20:45 UTC │
	│ start   │ ha-872795 start --wait true --alsologtostderr -v 5                          │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node list --alsologtostderr -v 5                                  │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	│ node    │ ha-872795 node delete m03 --alsologtostderr -v 5                            │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
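	
The audit trail above shows the same `kubectl get pods -o jsonpath='{.items[*].status.podIP}'` query repeated roughly a dozen times over two minutes: the DeployApp step polls until every busybox pod has been assigned an IP. A hedged Go sketch of that polling pattern (a hypothetical helper, not the test's own code; the replica count and the use of the profile name as kubectl context are assumptions):

// Poll pod IPs via kubectl until all replicas are addressable or a
// deadline passes, mirroring the repeated jsonpath queries logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podIPs(context string) ([]string, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	wantIPs := 3 // assumption: one busybox replica per control-plane node
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		ips, err := podIPs("ha-872795")
		if err == nil && len(ips) == wantIPs {
			fmt.Println("all pods have IPs:", ips)
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for pod IPs")
}
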
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:45:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
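	
Every entry that follows uses the klog header layout stated above. For post-processing these logs, a small sketch (assuming only that stated layout) that splits an entry into its fields:

// Parse a klog-style line: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
package main

import (
	"fmt"
	"regexp"
)

var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	m := klogLine.FindStringSubmatch(
		"I1002 20:45:59.273214   78899 out.go:360] Setting OutFile to fd 1 ...")
	if m == nil {
		panic("no match")
	}
	fmt.Printf("severity=%s date=%s time=%s tid=%s src=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
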
	I1002 20:45:59.273214   78899 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:59.273492   78899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:59.273503   78899 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:59.273509   78899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:59.273755   78899 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:45:59.274203   78899 out.go:368] Setting JSON to false
	I1002 20:45:59.275100   78899 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5308,"bootTime":1759432651,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:45:59.275172   78899 start.go:140] virtualization: kvm guest
	I1002 20:45:59.277322   78899 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:45:59.278717   78899 notify.go:221] Checking for updates...
	I1002 20:45:59.278734   78899 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:45:59.280224   78899 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:45:59.281523   78899 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:45:59.282829   78899 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:45:59.283968   78899 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:45:59.285159   78899 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:45:59.286946   78899 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:59.287045   78899 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:45:59.312895   78899 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:45:59.312963   78899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:59.365695   78899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:45:59.355393625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:45:59.365846   78899 docker.go:319] overlay module found
	I1002 20:45:59.367547   78899 out.go:179] * Using the docker driver based on existing profile
	I1002 20:45:59.368669   78899 start.go:306] selected driver: docker
	I1002 20:45:59.368691   78899 start.go:936] validating driver "docker" against &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:45:59.368764   78899 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:45:59.368835   78899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:59.420192   78899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:45:59.410429763 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:45:59.420918   78899 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:45:59.420950   78899 cni.go:84] Creating CNI manager for ""
	I1002 20:45:59.420996   78899 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:45:59.421049   78899 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:45:59.422984   78899 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:45:59.424152   78899 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:45:59.425341   78899 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:45:59.426550   78899 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:45:59.426588   78899 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:45:59.426598   78899 cache.go:59] Caching tarball of preloaded images
	I1002 20:45:59.426671   78899 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:45:59.426737   78899 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:45:59.426752   78899 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:45:59.426839   78899 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:45:59.445659   78899 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:45:59.445684   78899 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:45:59.445705   78899 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:45:59.445727   78899 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:45:59.445817   78899 start.go:365] duration metric: took 46.032µs to acquireMachinesLock for "ha-872795"
	I1002 20:45:59.445849   78899 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:45:59.445859   78899 fix.go:55] fixHost starting: 
	I1002 20:45:59.446055   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:59.462065   78899 fix.go:113] recreateIfNeeded on ha-872795: state=Stopped err=<nil>
	W1002 20:45:59.462095   78899 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:45:59.463993   78899 out.go:252] * Restarting existing docker container for "ha-872795" ...
	I1002 20:45:59.464064   78899 cli_runner.go:164] Run: docker start ha-872795
	I1002 20:45:59.685107   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:59.703014   78899 kic.go:430] container "ha-872795" state is running.
	I1002 20:45:59.703476   78899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:45:59.721917   78899 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:45:59.722128   78899 machine.go:93] provisionDockerMachine start ...
	I1002 20:45:59.722199   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:45:59.740462   78899 main.go:141] libmachine: Using SSH client type: native
	I1002 20:45:59.740703   78899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 20:45:59.740719   78899 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:45:59.741377   78899 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45756->127.0.0.1:32788: read: connection reset by peer
	I1002 20:46:02.885620   78899 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
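	
The failed dial at 20:45:59 and the successful hostname command at 20:46:02 show the provisioner tolerating a connection reset while sshd inside the restarted container is still coming up, then simply dialing again. A hedged Go sketch of that retry behaviour (illustrative only; minikube's real loop lives in libmachine):

// Dial with bounded retries, absorbing transient resets like the
// "read: connection reset by peer" seen above.
package main

import (
	"fmt"
	"net"
	"time"
)

func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err // e.g. connection reset while sshd is starting
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:32788", 10)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}
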
	
	I1002 20:46:02.885643   78899 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:46:02.885724   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:02.903157   78899 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:02.903362   78899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 20:46:02.903374   78899 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:46:03.053956   78899 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:46:03.054038   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:03.071746   78899 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:03.071971   78899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 20:46:03.071994   78899 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:46:03.214048   78899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:46:03.214082   78899 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:46:03.214121   78899 ubuntu.go:190] setting up certificates
	I1002 20:46:03.214132   78899 provision.go:84] configureAuth start
	I1002 20:46:03.214197   78899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:46:03.231298   78899 provision.go:143] copyHostCerts
	I1002 20:46:03.231330   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:46:03.231366   78899 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:46:03.231391   78899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:46:03.231472   78899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:46:03.231573   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:46:03.231600   78899 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:46:03.231610   78899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:46:03.231673   78899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:46:03.231747   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:46:03.231769   78899 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:46:03.231778   78899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:46:03.231823   78899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:46:03.231892   78899 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
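	
The "generating server cert" step above amounts to issuing a CA-signed server certificate carrying the listed SANs. A simplified Go sketch of that idea, under stated assumptions: it mints a throwaway CA instead of loading the profile CA from .minikube/certs, skips file locking, and uses fixed serial numbers; it is not minikube's actual implementation (error handling is elided for brevity):

// Issue a server certificate with the IP and DNS SANs from the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-872795"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs from the log: san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:    []string{"ha-872795", "localhost", "minikube"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
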
	I1002 20:46:03.490166   78899 provision.go:177] copyRemoteCerts
	I1002 20:46:03.490221   78899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:46:03.490259   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:03.508435   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:03.609601   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:46:03.609667   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:46:03.626240   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:46:03.626304   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 20:46:03.642410   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:46:03.642458   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:46:03.658782   78899 provision.go:87] duration metric: took 444.634386ms to configureAuth
	I1002 20:46:03.658808   78899 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:46:03.658975   78899 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:46:03.659073   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:03.676668   78899 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:03.676868   78899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 20:46:03.676886   78899 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:46:03.930147   78899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:46:03.930169   78899 machine.go:96] duration metric: took 4.208026772s to provisionDockerMachine
	I1002 20:46:03.930182   78899 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:46:03.930195   78899 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:46:03.930249   78899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:46:03.930307   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:03.947258   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:04.047956   78899 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:46:04.051422   78899 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:46:04.051453   78899 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:46:04.051465   78899 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:46:04.051521   78899 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:46:04.051595   78899 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:46:04.051605   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:46:04.051733   78899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:46:04.059188   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:46:04.075417   78899 start.go:297] duration metric: took 145.220836ms for postStartSetup
	I1002 20:46:04.075487   78899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:46:04.075532   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:04.093129   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:04.191077   78899 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:46:04.195739   78899 fix.go:57] duration metric: took 4.749874368s for fixHost
	I1002 20:46:04.195760   78899 start.go:84] releasing machines lock for "ha-872795", held for 4.749931233s
	I1002 20:46:04.195825   78899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:46:04.212606   78899 ssh_runner.go:195] Run: cat /version.json
	I1002 20:46:04.212673   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:04.212711   78899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:46:04.212768   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:04.230369   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:04.230715   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:04.379868   78899 ssh_runner.go:195] Run: systemctl --version
	I1002 20:46:04.386052   78899 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:46:04.419376   78899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:46:04.424169   78899 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:46:04.424233   78899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:46:04.431914   78899 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:46:04.431932   78899 start.go:496] detecting cgroup driver to use...
	I1002 20:46:04.431960   78899 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:46:04.432004   78899 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:46:04.445356   78899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:46:04.456824   78899 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:46:04.456874   78899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:46:04.470403   78899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:46:04.481638   78899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:46:04.557990   78899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:46:04.636555   78899 docker.go:234] disabling docker service ...
	I1002 20:46:04.636608   78899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:46:04.650153   78899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:46:04.662016   78899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:46:04.734613   78899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:46:04.811825   78899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:46:04.823641   78899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:46:04.837220   78899 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:46:04.837279   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.845762   78899 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:46:04.845809   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.854146   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.862344   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.870401   78899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:46:04.878640   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.886882   78899 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.894503   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.902512   78899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:46:04.909191   78899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:46:04.915764   78899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:46:04.993486   78899 ssh_runner.go:195] Run: sudo systemctl restart crio
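	
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image and switching the cgroup manager to systemd before crio is restarted. A regexp-based Go sketch of the same two rewrites, shown only for illustration (the harness really does run sed over ssh; the sample input below is invented):

// Apply the two config rewrites from the log to a crio.conf fragment.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"cgroupfs\"\n"
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	fmt.Print(conf)
}
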
	I1002 20:46:05.096845   78899 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:46:05.096913   78899 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:46:05.100739   78899 start.go:564] Will wait 60s for crictl version
	I1002 20:46:05.100794   78899 ssh_runner.go:195] Run: which crictl
	I1002 20:46:05.104308   78899 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:46:05.127966   78899 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:46:05.128043   78899 ssh_runner.go:195] Run: crio --version
	I1002 20:46:05.154454   78899 ssh_runner.go:195] Run: crio --version
	I1002 20:46:05.182372   78899 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:46:05.183558   78899 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:46:05.200765   78899 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:46:05.204765   78899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:46:05.214588   78899 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:46:05.214721   78899 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:46:05.214780   78899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:46:05.245534   78899 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:46:05.245552   78899 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:46:05.245593   78899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:46:05.270550   78899 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:46:05.270570   78899 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:46:05.270577   78899 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:46:05.270681   78899 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:46:05.270753   78899 ssh_runner.go:195] Run: crio config
	I1002 20:46:05.313363   78899 cni.go:84] Creating CNI manager for ""
	I1002 20:46:05.313383   78899 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:46:05.313397   78899 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:46:05.313416   78899 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:46:05.313519   78899 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
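The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), later copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch, assuming gopkg.in/yaml.v3, that splits such a stream into its documents and reports each one's kind; illustrative, not minikube's own parser:

// Decode a multi-document YAML stream and print apiVersion/kind per document.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc typeMeta
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
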
	I1002 20:46:05.313572   78899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:46:05.321352   78899 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:46:05.321406   78899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:46:05.328622   78899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:46:05.340520   78899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:46:05.352503   78899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:46:05.364256   78899 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:46:05.367691   78899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:46:05.376985   78899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:46:05.453441   78899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:46:05.477718   78899 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:46:05.477741   78899 certs.go:195] generating shared ca certs ...
	I1002 20:46:05.477762   78899 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:05.477898   78899 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:46:05.477934   78899 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:46:05.477943   78899 certs.go:257] generating profile certs ...
	I1002 20:46:05.478028   78899 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:46:05.478050   78899 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4
	I1002 20:46:05.478067   78899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.da7297d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 20:46:05.639131   78899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.da7297d4 ...
	I1002 20:46:05.639158   78899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.da7297d4: {Name:mkfc40b7884f53bead483594047f8801d6c65008 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:05.639360   78899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4 ...
	I1002 20:46:05.639377   78899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4: {Name:mkbc72faf4d67a50affdab4239091d17eab3b576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:05.639481   78899 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.da7297d4 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt
	I1002 20:46:05.639675   78899 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key
	I1002 20:46:05.639868   78899 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:46:05.639889   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:46:05.639909   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:46:05.639931   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:46:05.639955   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:46:05.639971   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:46:05.639988   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:46:05.640006   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:46:05.640023   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:46:05.640085   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:46:05.640129   78899 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:46:05.640142   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:46:05.640172   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:46:05.640204   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:46:05.640245   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:46:05.640297   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:46:05.640338   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:46:05.640356   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:46:05.640374   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:05.640909   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:46:05.658131   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:46:05.675243   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:46:05.691273   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:46:05.707405   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 20:46:05.723209   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:46:05.739016   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:46:05.755859   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:46:05.771787   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:46:05.788286   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:46:05.804179   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:46:05.820035   78899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:46:05.831579   78899 ssh_runner.go:195] Run: openssl version
	I1002 20:46:05.837409   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:46:05.845365   78899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:05.848827   78899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:05.848873   78899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:05.882807   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:46:05.890823   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:46:05.899057   78899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:46:05.902614   78899 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:46:05.902684   78899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:46:05.935800   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:46:05.943470   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:46:05.951918   78899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:46:05.955342   78899 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:46:05.955394   78899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:46:05.997712   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:46:06.007162   78899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:46:06.010895   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:46:06.051524   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:46:06.085577   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:46:06.119097   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:46:06.153217   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:46:06.186423   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 20:46:06.220190   78899 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:46:06.220256   78899 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 20:46:06.220304   78899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:46:06.246811   78899 cri.go:89] found id: ""
	I1002 20:46:06.246881   78899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:46:06.254366   78899 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:46:06.254384   78899 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:46:06.254422   78899 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:46:06.262225   78899 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:46:06.262586   78899 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:46:06.262726   78899 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-9327/kubeconfig needs updating (will repair): [kubeconfig missing "ha-872795" cluster setting kubeconfig missing "ha-872795" context setting]
	I1002 20:46:06.263079   78899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:06.263592   78899 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:46:06.264072   78899 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:46:06.264087   78899 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:46:06.264091   78899 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:46:06.264094   78899 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:46:06.264100   78899 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:46:06.264140   78899 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:46:06.264397   78899 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:46:06.271713   78899 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:46:06.271741   78899 kubeadm.go:601] duration metric: took 17.352317ms to restartPrimaryControlPlane
	I1002 20:46:06.271749   78899 kubeadm.go:402] duration metric: took 51.569514ms to StartCluster
	I1002 20:46:06.271767   78899 settings.go:142] acquiring lock: {Name:mk7417292ad351472478c5266970b58ebdd4130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:06.271822   78899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:46:06.272244   78899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:06.272428   78899 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:46:06.272502   78899 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:46:06.272602   78899 addons.go:69] Setting storage-provisioner=true in profile "ha-872795"
	I1002 20:46:06.272624   78899 addons.go:238] Setting addon storage-provisioner=true in "ha-872795"
	I1002 20:46:06.272679   78899 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:46:06.272682   78899 addons.go:69] Setting default-storageclass=true in profile "ha-872795"
	I1002 20:46:06.272710   78899 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-872795"
	I1002 20:46:06.272711   78899 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:46:06.273007   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:46:06.273074   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:46:06.275849   78899 out.go:179] * Verifying Kubernetes components...
	I1002 20:46:06.277085   78899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:46:06.293700   78899 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:46:06.294029   78899 addons.go:238] Setting addon default-storageclass=true in "ha-872795"
	I1002 20:46:06.294066   78899 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:46:06.294514   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:46:06.296310   78899 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:46:06.297578   78899 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:46:06.297591   78899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:46:06.297638   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:06.315893   78899 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:46:06.315919   78899 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:46:06.315977   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:06.322842   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:06.338186   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:06.386402   78899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:46:06.398777   78899 node_ready.go:35] waiting up to 6m0s for node "ha-872795" to be "Ready" ...
	I1002 20:46:06.430020   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:46:06.444406   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:06.481919   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:06.481958   78899 retry.go:31] will retry after 309.807231ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:06.497604   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:06.497632   78899 retry.go:31] will retry after 244.884641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:06.743097   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:46:06.792601   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:06.794112   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:06.794136   78899 retry.go:31] will retry after 533.883087ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:06.842590   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:06.842621   78899 retry.go:31] will retry after 410.666568ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.253624   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:07.305432   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.305458   78899 retry.go:31] will retry after 489.641892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.328610   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:07.380758   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.380795   78899 retry.go:31] will retry after 369.153465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.750784   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:46:07.795231   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:07.802320   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.802354   78899 retry.go:31] will retry after 900.902263ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:07.846519   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.846552   78899 retry.go:31] will retry after 825.480637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:08.400289   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:08.672691   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:46:08.704184   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:08.723919   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:08.723951   78899 retry.go:31] will retry after 1.623242145s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:08.754902   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:08.754942   78899 retry.go:31] will retry after 1.534997391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:10.290627   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:10.340323   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:10.340352   78899 retry.go:31] will retry after 1.072500895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:10.347501   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:10.397032   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:10.397058   78899 retry.go:31] will retry after 2.562445815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:10.899389   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:11.413692   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:11.465155   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:11.465197   78899 retry.go:31] will retry after 2.545749407s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:12.900153   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:12.960290   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:13.011206   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:13.011233   78899 retry.go:31] will retry after 2.264218786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:14.011720   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:14.064198   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:14.064229   78899 retry.go:31] will retry after 5.430080707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:14.900209   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:15.275689   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:15.325885   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:15.325919   78899 retry.go:31] will retry after 5.718863405s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:17.399470   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:19.399809   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:19.495047   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:19.546169   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:19.546212   78899 retry.go:31] will retry after 6.349030782s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:21.045488   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:21.095479   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:21.095509   78899 retry.go:31] will retry after 4.412738231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:21.400327   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:23.899614   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:25.508453   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:25.560861   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:25.560888   78899 retry.go:31] will retry after 8.695034149s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:25.896450   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:25.947240   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:25.947277   78899 retry.go:31] will retry after 14.217722553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:26.399408   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:28.400215   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:30.900092   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:33.400118   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:34.256598   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:34.309569   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:34.309630   78899 retry.go:31] will retry after 19.451161912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:35.899352   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:37.899781   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:40.165763   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:40.217670   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:40.217704   78899 retry.go:31] will retry after 8.892100881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:40.399315   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:42.399564   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:44.399980   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:46.899303   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:48.899477   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:49.110844   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:49.161863   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:49.161890   78899 retry.go:31] will retry after 18.08446926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:51.399432   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:53.399603   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:53.761037   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:53.812412   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:53.812439   78899 retry.go:31] will retry after 25.479513407s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:55.899367   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:57.899529   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:00.399361   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:02.399525   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:04.899405   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:47:07.246772   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:47:07.299553   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:47:07.299581   78899 retry.go:31] will retry after 17.600869808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:07.400189   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:09.899776   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:11.900115   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:14.399576   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:16.399745   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:18.400228   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:47:19.293047   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:47:19.344597   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:47:19.344630   78899 retry.go:31] will retry after 39.025323659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:20.899449   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:22.899645   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:24.900270   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:47:24.901299   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:47:24.952227   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:47:24.952258   78899 retry.go:31] will retry after 34.385430665s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:27.400196   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:29.899432   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:31.900050   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:34.400107   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:36.900095   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:39.400043   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:41.400189   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:43.900178   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:46.399320   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:48.400172   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:50.900066   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:53.399988   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:55.400092   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:57.400223   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:47:58.370762   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:47:58.424525   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:58.424640   78899 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:47:59.338316   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:47:59.390212   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:59.390328   78899 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:47:59.392488   78899 out.go:179] * Enabled addons: 
	I1002 20:47:59.394002   78899 addons.go:514] duration metric: took 1m53.121505654s for enable addons: enabled=[]
	W1002 20:47:59.899586   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:02.400313   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:04.899323   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:06.900226   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:09.400085   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:11.900023   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:14.400129   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:16.400283   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:18.900265   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:21.400147   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:23.900093   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:26.399282   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:28.400196   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:30.400232   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:32.900280   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:35.399278   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:37.400223   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:39.400256   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:41.900166   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:44.400175   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:46.400211   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:48.900081   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:50.900220   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:53.400165   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:55.900072   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:48:58.400079   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:00.900059   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:03.400072   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:05.400105   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:07.400152   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:09.400199   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:11.900093   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:13.900189   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:16.399380   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:18.400192   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:20.900188   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:23.400053   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:25.400271   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:27.900209   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:29.900422   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:32.400234   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:34.900235   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:37.400208   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:39.900090   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:42.400075   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:44.900256   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:47.400250   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:49.400299   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:51.900262   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:54.399328   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:56.399370   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:49:58.400237   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:00.900256   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:03.400303   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:05.900106   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:08.399977   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:10.400162   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:12.900033   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:14.900100   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:16.900212   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:19.400248   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:21.900062   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:24.400122   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:26.400193   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:28.900088   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:30.900230   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:33.400099   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:35.900072   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:37.900203   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:39.900270   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:42.400176   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:44.400277   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:46.900306   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:49.399270   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:51.400304   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:53.900287   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:56.400212   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:50:58.900133   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:00.900311   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:03.400234   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:05.900198   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:08.400064   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:10.900046   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:13.399369   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:15.400276   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:17.900327   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:20.400235   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:22.900085   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:24.900199   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:27.400267   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:29.899494   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:31.900351   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:34.399288   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:36.399420   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:38.400249   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:40.900211   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:43.400231   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:45.400275   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:47.900301   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:50.400321   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:52.900338   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:55.400304   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:51:57.899398   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:00.400254   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:02.400323   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:04.899587   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:06.399229   78899 node_ready.go:38] duration metric: took 6m0.000412603s for node "ha-872795" to be "Ready" ...
	I1002 20:52:06.401641   78899 out.go:203] 
	W1002 20:52:06.403684   78899 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:52:06.403700   78899 out.go:285] * 
	W1002 20:52:06.405327   78899 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:52:06.406669   78899 out.go:203] 
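The six minutes of node_ready retries above are a plain poll-until-deadline loop: probe the node object every ~2–2.5s, log each connection refusal, and give up when the wait budget runs out. A rough shell equivalent (an illustration only, not minikube's node_ready.go, and assuming the endpoint could be probed without credentials):

  # Poll the node object every 2s; give up after the 6m wait budget.
  deadline=$(( $(date +%s) + 360 ))
  until curl -ksf https://192.168.49.2:8443/api/v1/nodes/ha-872795 >/dev/null 2>&1; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "WaitNodeCondition: context deadline exceeded" >&2
      break
    fi
    sleep 2
  done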
	
	
	==> CRI-O <==
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.589213497Z" level=info msg="createCtr: deleting container 926e29dc3fb1b02ca77bc8caca3996115624a20daf2dc23467eb8a47158b2cc5 from storage" id=fd99dcfb-3cc6-4bd5-bf8e-39e478c8802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.591182504Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-872795_kube-system_9558d13dd1c45ecd0f3d491377941404_0" id=6f017519-ce8a-4a31-bf97-0371197b41aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.591451342Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=fd99dcfb-3cc6-4bd5-bf8e-39e478c8802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.562898181Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=3c4dd53b-4aa7-4821-994f-e95dd207f1a1 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.563898751Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=dfc42a93-e5ff-4b26-8da0-1022755692fb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.564860957Z" level=info msg="Creating container: kube-system/etcd-ha-872795/etcd" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.565165627Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.569872373Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.570450967Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.584481668Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.585933252Z" level=info msg="createCtr: deleting container ID fbd54bc8ecff9d6e26b5b1c34068d48e0a20cc244639b7f0fabd5e26b979fe1a from idIndex" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.58596738Z" level=info msg="createCtr: removing container fbd54bc8ecff9d6e26b5b1c34068d48e0a20cc244639b7f0fabd5e26b979fe1a" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.585996491Z" level=info msg="createCtr: deleting container fbd54bc8ecff9d6e26b5b1c34068d48e0a20cc244639b7f0fabd5e26b979fe1a from storage" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.588421447Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-872795_kube-system_d62260a65d00c99f27090ce6484101a9_0" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.564473469Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=cac4474a-4d7c-444e-8ede-0863b59c26b6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.565403616Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=e7adee59-7047-4d3c-93a1-737fd285c3bc name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.566517211Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-872795/kube-controller-manager" id=857f1b47-7c39-4f66-bc0a-6649d6fd5331 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.566822267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.571617017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.57220408Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.583830654Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=857f1b47-7c39-4f66-bc0a-6649d6fd5331 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.585171504Z" level=info msg="createCtr: deleting container ID d3db92ed45d66bf0f2fc46c231cb760dc609d67af1141b9e4591763ecbc3b547 from idIndex" id=857f1b47-7c39-4f66-bc0a-6649d6fd5331 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.585201926Z" level=info msg="createCtr: removing container d3db92ed45d66bf0f2fc46c231cb760dc609d67af1141b9e4591763ecbc3b547" id=857f1b47-7c39-4f66-bc0a-6649d6fd5331 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.585231054Z" level=info msg="createCtr: deleting container d3db92ed45d66bf0f2fc46c231cb760dc609d67af1141b9e4591763ecbc3b547 from storage" id=857f1b47-7c39-4f66-bc0a-6649d6fd5331 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.587360955Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=857f1b47-7c39-4f66-bc0a-6649d6fd5331 name=/runtime.v1.RuntimeService/CreateContainer
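Every CreateContainer attempt in this section dies on the same error: the OCI runtime cannot open an sd-bus connection, the symptom expected when CRI-O is configured for the systemd cgroup manager but no systemd D-Bus socket is reachable inside the kicbase container. A hedged check from the host, plus an assumed mitigation (not verified against this run):

  # Inspect the active cgroup manager setting inside the node.
  minikube ssh -p ha-872795 -- sudo grep -r cgroup_manager /etc/crio
  # Assumed mitigation only: under [crio.runtime] in /etc/crio/crio.conf set
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup  = "pod"
  # then restart the runtime:
  minikube ssh -p ha-872795 -- sudo systemctl restart crio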
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:52:09.085052    2212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:52:09.085503    2212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:52:09.087094    2212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:52:09.087554    2212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:52:09.089086    2212 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:52:09 up  1:34,  0 user,  load average: 0.10, 0.16, 0.18
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:52:01 ha-872795 kubelet[676]: E1002 20:52:01.202199     676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:52:01 ha-872795 kubelet[676]: I1002 20:52:01.362248     676 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:52:01 ha-872795 kubelet[676]: E1002 20:52:01.362625     676 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:52:05 ha-872795 kubelet[676]: E1002 20:52:05.252182     676 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-872795.186ac781fd00879a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-872795,UID:ha-872795,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-872795 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-872795,},FirstTimestamp:2025-10-02 20:46:05.55097897 +0000 UTC m=+0.071290369,LastTimestamp:2025-10-02 20:46:05.55097897 +0000 UTC m=+0.071290369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-872795,}"
	Oct 02 20:52:05 ha-872795 kubelet[676]: E1002 20:52:05.578096     676 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-872795\" not found"
	Oct 02 20:52:06 ha-872795 kubelet[676]: E1002 20:52:06.562434     676 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:52:06 ha-872795 kubelet[676]: E1002 20:52:06.588743     676 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:52:06 ha-872795 kubelet[676]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:52:06 ha-872795 kubelet[676]:  > podSandboxID="b2335ec5342be7f98c7e4a4b5912aa55d23990fa851e2d58c3d4a39d85708c8d"
	Oct 02 20:52:06 ha-872795 kubelet[676]: E1002 20:52:06.588860     676 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:52:06 ha-872795 kubelet[676]:         container etcd start failed in pod etcd-ha-872795_kube-system(d62260a65d00c99f27090ce6484101a9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:52:06 ha-872795 kubelet[676]:  > logger="UnhandledError"
	Oct 02 20:52:06 ha-872795 kubelet[676]: E1002 20:52:06.588900     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-872795" podUID="d62260a65d00c99f27090ce6484101a9"
	Oct 02 20:52:08 ha-872795 kubelet[676]: E1002 20:52:08.068288     676 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 02 20:52:08 ha-872795 kubelet[676]: E1002 20:52:08.203608     676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:52:08 ha-872795 kubelet[676]: I1002 20:52:08.364457     676 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:52:08 ha-872795 kubelet[676]: E1002 20:52:08.364873     676 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:52:08 ha-872795 kubelet[676]: E1002 20:52:08.562146     676 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:52:08 ha-872795 kubelet[676]: E1002 20:52:08.587630     676 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:52:08 ha-872795 kubelet[676]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:52:08 ha-872795 kubelet[676]:  > podSandboxID="1b4ad0dc47b9a4985d33b6746de5bf4b721859db36b8536da8cce1580502cea3"
	Oct 02 20:52:08 ha-872795 kubelet[676]: E1002 20:52:08.587770     676 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:52:08 ha-872795 kubelet[676]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-872795_kube-system(bb93bd54c951d044e2ddbaf0dd48a41c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:52:08 ha-872795 kubelet[676]:  > logger="UnhandledError"
	Oct 02 20:52:08 ha-872795 kubelet[676]: E1002 20:52:08.587800     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-872795" podUID="bb93bd54c951d044e2ddbaf0dd48a41c"
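The kubelet's registration and lease failures all share the root cause already visible above: nothing answers on 192.168.49.2:8443 because the apiserver container was never created. A hedged probe from inside the node:

  # Expect no LISTEN entry on :8443, matching the connection-refused errors.
  minikube ssh -p ha-872795 -- sudo ss -ltnp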
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 2 (281.950824ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (1.74s)
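With the apiserver reported Stopped, the framework skips its kubectl post-mortem. The advice box earlier in the log names the canonical next step for a run like this (the profile flag is added here as an assumption):

  # Collect the full log bundle for attachment to an upstream issue.
  out/minikube-linux-amd64 -p ha-872795 logs --file=logs.txt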

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-872795" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-872795\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-872795\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-872795\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 79098,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:45:59.488616861Z",
	            "FinishedAt": "2025-10-02T20:45:58.36276989Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fe270a11d29905a7aa21ceba3c673cad94096380a45185c01c96de7e6b75dbe7",
	            "SandboxKey": "/var/run/docker/netns/fe270a11d299",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:9d:1e:6a:fe:99",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "38ef7297bec941b34747e498d90575ca5f4bb864e58670d3487fa859f3f506b4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
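
The 22/tcp entry in the port map above is what the harness dials for SSH; the Last Start log below runs the same Go template through `docker container inspect -f`. A minimal sketch of that lookup, assuming the docker CLI and this run's container name:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template the logs below use: the host port bound to 22/tcp.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "ha-872795").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("forwarded ssh port:", strings.TrimSpace(string(out))) // 32788 in this run
	}
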
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 2 (280.185797ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-872795 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml            │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- rollout status deployment/busybox                      │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node add --alsologtostderr -v 5                                   │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node stop m02 --alsologtostderr -v 5                              │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node start m02 --alsologtostderr -v 5                             │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node list --alsologtostderr -v 5                                  │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ stop    │ ha-872795 stop --alsologtostderr -v 5                                       │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │ 02 Oct 25 20:45 UTC │
	│ start   │ ha-872795 start --wait true --alsologtostderr -v 5                          │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node list --alsologtostderr -v 5                                  │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	│ node    │ ha-872795 node delete m03 --alsologtostderr -v 5                            │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:45:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:45:59.273214   78899 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:45:59.273492   78899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:59.273503   78899 out.go:374] Setting ErrFile to fd 2...
	I1002 20:45:59.273509   78899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:45:59.273755   78899 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:45:59.274203   78899 out.go:368] Setting JSON to false
	I1002 20:45:59.275100   78899 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5308,"bootTime":1759432651,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:45:59.275172   78899 start.go:140] virtualization: kvm guest
	I1002 20:45:59.277322   78899 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:45:59.278717   78899 notify.go:221] Checking for updates...
	I1002 20:45:59.278734   78899 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:45:59.280224   78899 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:45:59.281523   78899 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:45:59.282829   78899 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:45:59.283968   78899 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:45:59.285159   78899 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:45:59.286946   78899 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:45:59.287045   78899 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:45:59.312895   78899 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:45:59.312963   78899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:59.365695   78899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:45:59.355393625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:45:59.365846   78899 docker.go:319] overlay module found
	I1002 20:45:59.367547   78899 out.go:179] * Using the docker driver based on existing profile
	I1002 20:45:59.368669   78899 start.go:306] selected driver: docker
	I1002 20:45:59.368691   78899 start.go:936] validating driver "docker" against &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:45:59.368764   78899 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:45:59.368835   78899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:45:59.420192   78899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:45:59.410429763 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:45:59.420918   78899 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:45:59.420950   78899 cni.go:84] Creating CNI manager for ""
	I1002 20:45:59.420996   78899 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:45:59.421049   78899 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:45:59.422984   78899 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:45:59.424152   78899 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:45:59.425341   78899 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:45:59.426550   78899 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:45:59.426588   78899 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:45:59.426598   78899 cache.go:59] Caching tarball of preloaded images
	I1002 20:45:59.426671   78899 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:45:59.426737   78899 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:45:59.426752   78899 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:45:59.426839   78899 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:45:59.445659   78899 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:45:59.445684   78899 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:45:59.445705   78899 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:45:59.445727   78899 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:45:59.445817   78899 start.go:365] duration metric: took 46.032µs to acquireMachinesLock for "ha-872795"
	I1002 20:45:59.445849   78899 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:45:59.445859   78899 fix.go:55] fixHost starting: 
	I1002 20:45:59.446055   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:59.462065   78899 fix.go:113] recreateIfNeeded on ha-872795: state=Stopped err=<nil>
	W1002 20:45:59.462095   78899 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:45:59.463993   78899 out.go:252] * Restarting existing docker container for "ha-872795" ...
	I1002 20:45:59.464064   78899 cli_runner.go:164] Run: docker start ha-872795
	I1002 20:45:59.685107   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:45:59.703014   78899 kic.go:430] container "ha-872795" state is running.
	I1002 20:45:59.703476   78899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:45:59.721917   78899 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:45:59.722128   78899 machine.go:93] provisionDockerMachine start ...
	I1002 20:45:59.722199   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:45:59.740462   78899 main.go:141] libmachine: Using SSH client type: native
	I1002 20:45:59.740703   78899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 20:45:59.740719   78899 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:45:59.741377   78899 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45756->127.0.0.1:32788: read: connection reset by peer
	I1002 20:46:02.885620   78899 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:46:02.885643   78899 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:46:02.885724   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:02.903157   78899 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:02.903362   78899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 20:46:02.903374   78899 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:46:03.053956   78899 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:46:03.054038   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:03.071746   78899 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:03.071971   78899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 20:46:03.071994   78899 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:46:03.214048   78899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:46:03.214082   78899 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:46:03.214121   78899 ubuntu.go:190] setting up certificates
	I1002 20:46:03.214132   78899 provision.go:84] configureAuth start
	I1002 20:46:03.214197   78899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:46:03.231298   78899 provision.go:143] copyHostCerts
	I1002 20:46:03.231330   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:46:03.231366   78899 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:46:03.231391   78899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:46:03.231472   78899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:46:03.231573   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:46:03.231600   78899 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:46:03.231610   78899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:46:03.231673   78899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:46:03.231747   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:46:03.231769   78899 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:46:03.231778   78899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:46:03.231823   78899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:46:03.231892   78899 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:46:03.490166   78899 provision.go:177] copyRemoteCerts
	I1002 20:46:03.490221   78899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:46:03.490259   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:03.508435   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:03.609601   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:46:03.609667   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:46:03.626240   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:46:03.626304   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 20:46:03.642410   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:46:03.642458   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:46:03.658782   78899 provision.go:87] duration metric: took 444.634386ms to configureAuth
	I1002 20:46:03.658808   78899 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:46:03.658975   78899 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:46:03.659073   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:03.676668   78899 main.go:141] libmachine: Using SSH client type: native
	I1002 20:46:03.676868   78899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 20:46:03.676886   78899 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:46:03.930147   78899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:46:03.930169   78899 machine.go:96] duration metric: took 4.208026772s to provisionDockerMachine
	I1002 20:46:03.930182   78899 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:46:03.930195   78899 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:46:03.930249   78899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:46:03.930307   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:03.947258   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:04.047956   78899 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:46:04.051422   78899 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:46:04.051453   78899 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:46:04.051465   78899 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:46:04.051521   78899 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:46:04.051595   78899 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:46:04.051605   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:46:04.051733   78899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:46:04.059188   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:46:04.075417   78899 start.go:297] duration metric: took 145.220836ms for postStartSetup
	I1002 20:46:04.075487   78899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:46:04.075532   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:04.093129   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:04.191077   78899 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:46:04.195739   78899 fix.go:57] duration metric: took 4.749874368s for fixHost
	I1002 20:46:04.195760   78899 start.go:84] releasing machines lock for "ha-872795", held for 4.749931233s
	I1002 20:46:04.195825   78899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:46:04.212606   78899 ssh_runner.go:195] Run: cat /version.json
	I1002 20:46:04.212673   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:04.212711   78899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:46:04.212768   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:04.230369   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:04.230715   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:04.379868   78899 ssh_runner.go:195] Run: systemctl --version
	I1002 20:46:04.386052   78899 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:46:04.419376   78899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:46:04.424169   78899 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:46:04.424233   78899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:46:04.431914   78899 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:46:04.431932   78899 start.go:496] detecting cgroup driver to use...
	I1002 20:46:04.431960   78899 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:46:04.432004   78899 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:46:04.445356   78899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:46:04.456824   78899 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:46:04.456874   78899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:46:04.470403   78899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:46:04.481638   78899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:46:04.557990   78899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:46:04.636555   78899 docker.go:234] disabling docker service ...
	I1002 20:46:04.636608   78899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:46:04.650153   78899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:46:04.662016   78899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:46:04.734613   78899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:46:04.811825   78899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:46:04.823641   78899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:46:04.837220   78899 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:46:04.837279   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.845762   78899 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:46:04.845809   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.854146   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.862344   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.870401   78899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:46:04.878640   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.886882   78899 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.894503   78899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:46:04.902512   78899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:46:04.909191   78899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:46:04.915764   78899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:46:04.993486   78899 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:46:05.096845   78899 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:46:05.096913   78899 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:46:05.100739   78899 start.go:564] Will wait 60s for crictl version
	I1002 20:46:05.100794   78899 ssh_runner.go:195] Run: which crictl
	I1002 20:46:05.104308   78899 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:46:05.127966   78899 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:46:05.128043   78899 ssh_runner.go:195] Run: crio --version
	I1002 20:46:05.154454   78899 ssh_runner.go:195] Run: crio --version
	I1002 20:46:05.182372   78899 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:46:05.183558   78899 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:46:05.200765   78899 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:46:05.204765   78899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:46:05.214588   78899 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:46:05.214721   78899 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:46:05.214780   78899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:46:05.245534   78899 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:46:05.245552   78899 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:46:05.245593   78899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:46:05.270550   78899 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:46:05.270570   78899 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:46:05.270577   78899 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:46:05.270681   78899 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:46:05.270753   78899 ssh_runner.go:195] Run: crio config
	I1002 20:46:05.313363   78899 cni.go:84] Creating CNI manager for ""
	I1002 20:46:05.313383   78899 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:46:05.313397   78899 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:46:05.313416   78899 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:46:05.313519   78899 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
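The rendered kubeadm config pins podSubnet to 10.244.0.0/16 and serviceSubnet to 10.96.0.0/12, and those two ranges must stay disjoint. A quick stdlib Go check of that invariant using the values above (kubeadm performs its own validation; this sketch is only illustrative):

package main

import (
	"fmt"
	"net"
)

// overlap reports whether two CIDR blocks intersect. Because CIDR blocks
// are either nested or disjoint, checking each network address against the
// other block is sufficient.
func overlap(a, b string) (bool, error) {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false, err
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false, err
	}
	return na.Contains(nb.IP) || nb.Contains(na.IP), nil
}

func main() {
	ok, err := overlap("10.244.0.0/16", "10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println("pod/service CIDRs overlap:", ok) // false for these values
}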
	I1002 20:46:05.313572   78899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:46:05.321352   78899 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:46:05.321406   78899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:46:05.328622   78899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:46:05.340520   78899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:46:05.352503   78899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:46:05.364256   78899 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:46:05.367691   78899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:46:05.376985   78899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:46:05.453441   78899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:46:05.477718   78899 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:46:05.477741   78899 certs.go:195] generating shared ca certs ...
	I1002 20:46:05.477762   78899 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:05.477898   78899 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:46:05.477934   78899 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:46:05.477943   78899 certs.go:257] generating profile certs ...
	I1002 20:46:05.478028   78899 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:46:05.478050   78899 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4
	I1002 20:46:05.478067   78899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.da7297d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 20:46:05.639131   78899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.da7297d4 ...
	I1002 20:46:05.639158   78899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.da7297d4: {Name:mkfc40b7884f53bead483594047f8801d6c65008 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:05.639360   78899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4 ...
	I1002 20:46:05.639377   78899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4: {Name:mkbc72faf4d67a50affdab4239091d17eab3b576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:05.639481   78899 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt.da7297d4 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt
	I1002 20:46:05.639675   78899 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key
	I1002 20:46:05.639868   78899 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
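The apiserver certificate generated above carries IP SANs for the in-cluster service VIP (10.96.0.1), loopback, 10.0.0.1, and the node IP (192.168.49.2), so clients can verify the API server under any of those addresses. A self-signed Go sketch with the same SAN set (minikube actually signs with its minikubeCA; the key size and validity window here are assumptions):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs logged above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	// Self-signed for brevity; the real cert is signed by the profile CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}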
	I1002 20:46:05.639889   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:46:05.639909   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:46:05.639931   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:46:05.639955   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:46:05.639971   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:46:05.639988   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:46:05.640006   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:46:05.640023   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:46:05.640085   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:46:05.640129   78899 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:46:05.640142   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:46:05.640172   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:46:05.640204   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:46:05.640245   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:46:05.640297   78899 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:46:05.640338   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:46:05.640356   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:46:05.640374   78899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:05.640909   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:46:05.658131   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:46:05.675243   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:46:05.691273   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:46:05.707405   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 20:46:05.723209   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:46:05.739016   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:46:05.755859   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:46:05.771787   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:46:05.788286   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:46:05.804179   78899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:46:05.820035   78899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:46:05.831579   78899 ssh_runner.go:195] Run: openssl version
	I1002 20:46:05.837409   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:46:05.845365   78899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:05.848827   78899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:05.848873   78899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:46:05.882807   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:46:05.890823   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:46:05.899057   78899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:46:05.902614   78899 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:46:05.902684   78899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:46:05.935800   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:46:05.943470   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:46:05.951918   78899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:46:05.955342   78899 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:46:05.955394   78899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:46:05.997712   78899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
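The ln -fs steps above create the hash-named symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL's directory-based certificate lookup expects: the link name is the certificate's subject hash, as printed by openssl x509 -hash, plus a .0 suffix (the suffix is a collision counter; a second cert with the same hash would get .1). A small Go sketch of the same step, shelling out to openssl just as the log does (run locally for illustration; minikube performs this over SSH inside the node):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and
// symlinks <hash>.0 in certDir at the cert, mirroring the ln -fs above.
func linkBySubjectHash(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // -f semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}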
	I1002 20:46:06.007162   78899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:46:06.010895   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:46:06.051524   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:46:06.085577   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:46:06.119097   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:46:06.153217   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:46:06.186423   78899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
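Each openssl x509 -checkend 86400 call above asks whether the certificate expires within the next 24 hours (86400 seconds); a zero exit status means the cert is still good for that window. The same check in pure Go with crypto/x509, using one of the cert paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside
// d -- the question `openssl x509 -checkend 86400` answers for d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}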
	I1002 20:46:06.220190   78899 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:46:06.220256   78899 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:46:06.220304   78899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:46:06.246811   78899 cri.go:89] found id: ""
	I1002 20:46:06.246881   78899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:46:06.254366   78899 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:46:06.254384   78899 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:46:06.254422   78899 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:46:06.262225   78899 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:46:06.262586   78899 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:46:06.262726   78899 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-9327/kubeconfig needs updating (will repair): [kubeconfig missing "ha-872795" cluster setting kubeconfig missing "ha-872795" context setting]
	I1002 20:46:06.263079   78899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:06.263592   78899 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:46:06.264072   78899 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:46:06.264087   78899 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:46:06.264091   78899 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:46:06.264094   78899 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:46:06.264100   78899 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:46:06.264140   78899 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:46:06.264397   78899 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:46:06.271713   78899 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:46:06.271741   78899 kubeadm.go:601] duration metric: took 17.352317ms to restartPrimaryControlPlane
	I1002 20:46:06.271749   78899 kubeadm.go:402] duration metric: took 51.569514ms to StartCluster
	I1002 20:46:06.271767   78899 settings.go:142] acquiring lock: {Name:mk7417292ad351472478c5266970b58ebdd4130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:06.271822   78899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:46:06.272244   78899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:46:06.272428   78899 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:46:06.272502   78899 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:46:06.272602   78899 addons.go:69] Setting storage-provisioner=true in profile "ha-872795"
	I1002 20:46:06.272624   78899 addons.go:238] Setting addon storage-provisioner=true in "ha-872795"
	I1002 20:46:06.272679   78899 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:46:06.272682   78899 addons.go:69] Setting default-storageclass=true in profile "ha-872795"
	I1002 20:46:06.272710   78899 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-872795"
	I1002 20:46:06.272711   78899 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:46:06.273007   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:46:06.273074   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:46:06.275849   78899 out.go:179] * Verifying Kubernetes components...
	I1002 20:46:06.277085   78899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:46:06.293700   78899 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:46:06.294029   78899 addons.go:238] Setting addon default-storageclass=true in "ha-872795"
	I1002 20:46:06.294066   78899 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:46:06.294514   78899 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:46:06.296310   78899 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:46:06.297578   78899 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:46:06.297591   78899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:46:06.297638   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:06.315893   78899 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:46:06.315919   78899 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:46:06.315977   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:46:06.322842   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:06.338186   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:46:06.386402   78899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:46:06.398777   78899 node_ready.go:35] waiting up to 6m0s for node "ha-872795" to be "Ready" ...
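node_ready.go now polls the node's Ready condition for up to 6m0s; the "connection refused" warnings further down are that poll failing while the API server is still coming up. The overall shape is a ticker-driven wait loop, sketched below with a hypothetical isReady stand-in for the real Kubernetes API query:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitReady polls isReady every interval until it returns true or ctx
// expires. Errors from isReady are treated as "not ready yet" and retried,
// matching how the log keeps retrying through refused connections.
func waitReady(ctx context.Context, interval time.Duration, isReady func() (bool, error)) error {
	tick := time.NewTicker(interval)
	defer tick.Stop()
	for {
		if ok, err := isReady(); err == nil && ok {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("node never became Ready: " + ctx.Err().Error())
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	start := time.Now()
	err := waitReady(ctx, 2*time.Second, func() (bool, error) {
		return time.Since(start) > 5*time.Second, nil // toy readiness signal
	})
	fmt.Println(err)
}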
	I1002 20:46:06.430020   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:46:06.444406   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:06.481919   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:06.481958   78899 retry.go:31] will retry after 309.807231ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:06.497604   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:06.497632   78899 retry.go:31] will retry after 244.884641ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
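The retry.go lines above show the addon applier's backoff: each failed kubectl apply is retried after a randomized, growing delay until the API server answers again. A generic Go sketch of that retry-with-jitter pattern (the attempt count and base delay are assumptions, not minikube's actual tuning):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, doubling delay
// between tries -- the shape of the "will retry after ..." lines above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base))) // jitter in [base, 2*base)
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		base *= 2 // back off
	}
	return err
}

func main() {
	calls := 0
	_ = retry(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("connection refused (attempt %d)", calls)
		}
		return nil // succeeds once the server is back
	})
}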
	I1002 20:46:06.743097   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:46:06.792601   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:06.794112   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:06.794136   78899 retry.go:31] will retry after 533.883087ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:06.842590   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:06.842621   78899 retry.go:31] will retry after 410.666568ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.253624   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:07.305432   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.305458   78899 retry.go:31] will retry after 489.641892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.328610   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:07.380758   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.380795   78899 retry.go:31] will retry after 369.153465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.750784   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:46:07.795231   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:07.802320   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.802354   78899 retry.go:31] will retry after 900.902263ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:07.846519   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:07.846552   78899 retry.go:31] will retry after 825.480637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:08.400289   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:08.672691   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:46:08.704184   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:08.723919   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:08.723951   78899 retry.go:31] will retry after 1.623242145s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:08.754902   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:08.754942   78899 retry.go:31] will retry after 1.534997391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:10.290627   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:10.340323   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:10.340352   78899 retry.go:31] will retry after 1.072500895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:10.347501   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:10.397032   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:10.397058   78899 retry.go:31] will retry after 2.562445815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:10.899389   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:11.413692   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:11.465155   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:11.465197   78899 retry.go:31] will retry after 2.545749407s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:12.900153   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:12.960290   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:13.011206   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:13.011233   78899 retry.go:31] will retry after 2.264218786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:14.011720   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:14.064198   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:14.064229   78899 retry.go:31] will retry after 5.430080707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:14.900209   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:15.275689   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:15.325885   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:15.325919   78899 retry.go:31] will retry after 5.718863405s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:17.399470   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:19.399809   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:19.495047   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:19.546169   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:19.546212   78899 retry.go:31] will retry after 6.349030782s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:21.045488   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:21.095479   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:21.095509   78899 retry.go:31] will retry after 4.412738231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:21.400327   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:23.899614   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:25.508453   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:25.560861   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:25.560888   78899 retry.go:31] will retry after 8.695034149s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:25.896450   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:25.947240   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:25.947277   78899 retry.go:31] will retry after 14.217722553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:26.399408   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:28.400215   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:30.900092   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:33.400118   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:34.256598   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:34.309569   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:34.309630   78899 retry.go:31] will retry after 19.451161912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:35.899352   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:37.899781   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:40.165763   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:40.217670   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:40.217704   78899 retry.go:31] will retry after 8.892100881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:40.399315   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:42.399564   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:44.399980   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:46.899303   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:48.899477   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:49.110844   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:46:49.161863   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:49.161890   78899 retry.go:31] will retry after 18.08446926s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:51.399432   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:53.399603   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:46:53.761037   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:46:53.812412   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:46:53.812439   78899 retry.go:31] will retry after 25.479513407s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:46:55.899367   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:46:57.899529   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:00.399361   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:02.399525   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:04.899405   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:47:07.246772   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:47:07.299553   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:47:07.299581   78899 retry.go:31] will retry after 17.600869808s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:07.400189   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:09.899776   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:11.900115   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:14.399576   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:16.399745   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:18.400228   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:47:19.293047   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:47:19.344597   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:47:19.344630   78899 retry.go:31] will retry after 39.025323659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:20.899449   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:22.899645   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:47:24.900270   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:47:24.901299   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:47:24.952227   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:47:24.952258   78899 retry.go:31] will retry after 34.385430665s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:27.400196   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	[... identical "Ready" poll warning repeated roughly every 2.5s ...]
	W1002 20:47:57.400223   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:47:58.370762   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:47:58.424525   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:58.424640   78899 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
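The error kubectl reports here is a validation failure only because client-side validation has to download the OpenAPI schema from the apiserver; the underlying fault is that nothing is listening on localhost:8443. A hedged Go probe of the apiserver's /readyz endpoint makes that distinction visible (the port comes from the log above; skipping TLS verification is for illustration only, a real check should use the cluster CA from the kubeconfig):

	// readyzprobe.go: separates "apiserver down" from "bad manifest" for the
	// apply failures above; a sketch, not part of the test harness.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Insecure on purpose: this only asks "is anything listening?".
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://localhost:8443/readyz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // matches the connection-refused lines above
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver readyz:", resp.Status) // may require auth depending on anonymous-auth
	}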
	I1002 20:47:59.338316   78899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:47:59.390212   78899 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:47:59.390328   78899 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:47:59.392488   78899 out.go:179] * Enabled addons: 
	I1002 20:47:59.394002   78899 addons.go:514] duration metric: took 1m53.121505654s for enable addons: enabled=[]
	W1002 20:47:59.899586   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	[... identical "Ready" poll warning repeated roughly every 2.5s from 20:47:59 through 20:52:04 ...]
	W1002 20:52:04.899587   78899 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:06.399229   78899 node_ready.go:38] duration metric: took 6m0.000412603s for node "ha-872795" to be "Ready" ...
	I1002 20:52:06.401641   78899 out.go:203] 
	W1002 20:52:06.403684   78899 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:52:06.403700   78899 out.go:285] * 
	W1002 20:52:06.405327   78899 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:52:06.406669   78899 out.go:203] 
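The six-minute wait that just expired is a poll of the node's Ready condition against the apiserver. A client-go sketch of the same loop, illustrative rather than minikube's actual node_ready.go (the kubeconfig path, node name, cadence, and timeout are taken from the log above):

	// nodeready.go: poll a node's Ready condition until true or timeout.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 2.5s (the cadence visible above) for up to 6 minutes.
		err = wait.PollUntilContextTimeout(context.Background(), 2500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "ha-872795", metav1.GetOptions{})
				if err != nil {
					fmt.Println("will retry:", err) // connection refused while the apiserver is down
					return false, nil               // swallow the error so the poll keeps retrying
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("wait result:", err) // "context deadline exceeded" is the GUEST_START failure above
	}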
	
	
	==> CRI-O <==
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.589213497Z" level=info msg="createCtr: deleting container 926e29dc3fb1b02ca77bc8caca3996115624a20daf2dc23467eb8a47158b2cc5 from storage" id=fd99dcfb-3cc6-4bd5-bf8e-39e478c8802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.591182504Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-872795_kube-system_9558d13dd1c45ecd0f3d491377941404_0" id=6f017519-ce8a-4a31-bf97-0371197b41aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:51:57 ha-872795 crio[527]: time="2025-10-02T20:51:57.591451342Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=fd99dcfb-3cc6-4bd5-bf8e-39e478c8802d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.562898181Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=3c4dd53b-4aa7-4821-994f-e95dd207f1a1 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.563898751Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=dfc42a93-e5ff-4b26-8da0-1022755692fb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.564860957Z" level=info msg="Creating container: kube-system/etcd-ha-872795/etcd" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.565165627Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.569872373Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.570450967Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.584481668Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.585933252Z" level=info msg="createCtr: deleting container ID fbd54bc8ecff9d6e26b5b1c34068d48e0a20cc244639b7f0fabd5e26b979fe1a from idIndex" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.58596738Z" level=info msg="createCtr: removing container fbd54bc8ecff9d6e26b5b1c34068d48e0a20cc244639b7f0fabd5e26b979fe1a" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.585996491Z" level=info msg="createCtr: deleting container fbd54bc8ecff9d6e26b5b1c34068d48e0a20cc244639b7f0fabd5e26b979fe1a from storage" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:06 ha-872795 crio[527]: time="2025-10-02T20:52:06.588421447Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-872795_kube-system_d62260a65d00c99f27090ce6484101a9_0" id=414cdc3c-c380-4d49-93b4-eb93371dc697 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.564473469Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=cac4474a-4d7c-444e-8ede-0863b59c26b6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.565403616Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=e7adee59-7047-4d3c-93a1-737fd285c3bc name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.566517211Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-872795/kube-controller-manager" id=857f1b47-7c39-4f66-bc0a-6649d6fd5331 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.566822267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.571617017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.57220408Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.583830654Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=857f1b47-7c39-4f66-bc0a-6649d6fd5331 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.585171504Z" level=info msg="createCtr: deleting container ID d3db92ed45d66bf0f2fc46c231cb760dc609d67af1141b9e4591763ecbc3b547 from idIndex" id=857f1b47-7c39-4f66-bc0a-6649d6fd5331 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.585201926Z" level=info msg="createCtr: removing container d3db92ed45d66bf0f2fc46c231cb760dc609d67af1141b9e4591763ecbc3b547" id=857f1b47-7c39-4f66-bc0a-6649d6fd5331 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.585231054Z" level=info msg="createCtr: deleting container d3db92ed45d66bf0f2fc46c231cb760dc609d67af1141b9e4591763ecbc3b547 from storage" id=857f1b47-7c39-4f66-bc0a-6649d6fd5331 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:52:08 ha-872795 crio[527]: time="2025-10-02T20:52:08.587360955Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=857f1b47-7c39-4f66-bc0a-6649d6fd5331 name=/runtime.v1.RuntimeService/CreateContainer
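The repeated "Container creation error: cannot open sd-bus" above ordinarily means the OCI runtime was asked to place containers in systemd-managed cgroups but cannot reach a systemd D-Bus socket, which is common when the node itself runs inside a container without systemd. A small Go check for the usual socket locations (the paths are the conventional defaults, assumed rather than read from this run); if they are absent, configuring CRI-O with cgroup_manager = "cgroupfs" avoids the sd-bus dependency entirely:

	// sdbuscheck.go: probe for the sockets the systemd cgroup driver needs;
	// a diagnostic sketch for the CRI-O errors above, not part of the harness.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		for _, p := range []string{
			"/run/systemd/private",        // systemd manager's private bus
			"/run/dbus/system_bus_socket", // system D-Bus socket
		} {
			if _, err := os.Stat(p); err != nil {
				fmt.Printf("%s: missing (%v)\n", p, err)
			} else {
				fmt.Printf("%s: present\n", p)
			}
		}
	}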
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:52:10.598515    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:52:10.599498    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:52:10.600612    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:52:10.601089    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:52:10.602189    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:52:10 up  1:34,  0 user,  load average: 0.10, 0.16, 0.18
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:52:08 ha-872795 kubelet[676]: E1002 20:52:08.364873     676 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:52:08 ha-872795 kubelet[676]: E1002 20:52:08.562146     676 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:52:08 ha-872795 kubelet[676]: E1002 20:52:08.587630     676 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:52:08 ha-872795 kubelet[676]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:52:08 ha-872795 kubelet[676]:  > podSandboxID="1b4ad0dc47b9a4985d33b6746de5bf4b721859db36b8536da8cce1580502cea3"
	Oct 02 20:52:08 ha-872795 kubelet[676]: E1002 20:52:08.587770     676 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:52:08 ha-872795 kubelet[676]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-872795_kube-system(bb93bd54c951d044e2ddbaf0dd48a41c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:52:08 ha-872795 kubelet[676]:  > logger="UnhandledError"
	Oct 02 20:52:08 ha-872795 kubelet[676]: E1002 20:52:08.587800     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-872795" podUID="bb93bd54c951d044e2ddbaf0dd48a41c"
	Oct 02 20:52:10 ha-872795 kubelet[676]: E1002 20:52:10.562571     676 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:52:10 ha-872795 kubelet[676]: E1002 20:52:10.562720     676 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:52:10 ha-872795 kubelet[676]: E1002 20:52:10.593329     676 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:52:10 ha-872795 kubelet[676]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:52:10 ha-872795 kubelet[676]:  > podSandboxID="95e9a0598c41c175bf2887ba3a4f12f54f9a77ba59f51cd7bb9467c46f2184ca"
	Oct 02 20:52:10 ha-872795 kubelet[676]: E1002 20:52:10.593446     676 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:52:10 ha-872795 kubelet[676]:         container kube-scheduler start failed in pod kube-scheduler-ha-872795_kube-system(9558d13dd1c45ecd0f3d491377941404): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:52:10 ha-872795 kubelet[676]:  > logger="UnhandledError"
	Oct 02 20:52:10 ha-872795 kubelet[676]: E1002 20:52:10.593467     676 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:52:10 ha-872795 kubelet[676]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:52:10 ha-872795 kubelet[676]:  > podSandboxID="fbba468a6b8da8eee62b30884a6443ce7a662e72cd83bfaeb394f9cb6930ca74"
	Oct 02 20:52:10 ha-872795 kubelet[676]: E1002 20:52:10.593482     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-872795" podUID="9558d13dd1c45ecd0f3d491377941404"
	Oct 02 20:52:10 ha-872795 kubelet[676]: E1002 20:52:10.593545     676 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:52:10 ha-872795 kubelet[676]:         container kube-apiserver start failed in pod kube-apiserver-ha-872795_kube-system(107e053e4ed538c93835a81754178211): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:52:10 ha-872795 kubelet[676]:  > logger="UnhandledError"
	Oct 02 20:52:10 ha-872795 kubelet[676]: E1002 20:52:10.594709     676 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-872795" podUID="107e053e4ed538c93835a81754178211"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 2 (277.070599ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.50s)
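Every control-plane container in the post-mortem above dies with the same CRI-O error, "container create failed: cannot open sd-bus: No such file or directory". The RestartCluster log later in this report shows minikube configuring CRI-O with cgroup_manager = "systemd", and with that setting CRI-O places containers into cgroups by talking to systemd over its D-Bus socket, so the error indicates no reachable systemd bus inside the node container. A minimal diagnostic sketch, assuming the ha-872795 profile from this report, a running node, and the usual systemd socket paths (the paths are an assumption, not taken from these logs):

	# Confirm systemd is really PID 1 inside the kicbase node (assumes the node is running).
	out/minikube-linux-amd64 ssh -p ha-872795 -- 'cat /proc/1/comm'
	# Check for the bus endpoints CRI-O needs; their absence matches "cannot open sd-bus".
	out/minikube-linux-amd64 ssh -p ha-872795 -- 'ls -l /run/systemd/private /run/dbus/system_bus_socket'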

TestMultiControlPlane/serial/StopCluster (1.35s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-872795 stop --alsologtostderr -v 5: (1.203961639s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5: exit status 7 (63.66606ms)

-- stdout --
	ha-872795
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1002 20:52:12.204535   84485 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:52:12.204821   84485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:12.204831   84485 out.go:374] Setting ErrFile to fd 2...
	I1002 20:52:12.204835   84485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:12.205049   84485 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:52:12.205276   84485 out.go:368] Setting JSON to false
	I1002 20:52:12.205303   84485 mustload.go:65] Loading cluster: ha-872795
	I1002 20:52:12.205340   84485 notify.go:221] Checking for updates...
	I1002 20:52:12.205704   84485 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:12.205719   84485 status.go:174] checking status of ha-872795 ...
	I1002 20:52:12.206186   84485 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:12.223761   84485 status.go:371] ha-872795 host status = "Stopped" (err=<nil>)
	I1002 20:52:12.223798   84485 status.go:384] host is not running, skipping remaining checks
	I1002 20:52:12.223808   84485 status.go:176] ha-872795 status: &{Name:ha-872795 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5": ha-872795
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5": ha-872795
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-872795 status --alsologtostderr -v 5": ha-872795
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:45:59.488616861Z",
	            "FinishedAt": "2025-10-02T20:52:11.285843333Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
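For reference, the State block above shows the node container stopped cleanly rather than crashed: "Status": "exited" with "ExitCode": 130, "FinishedAt" lining up with the stop command, and the image's "StopSignal" set to SIGRTMIN+3, the signal systemd-as-PID-1 interprets as halt. A sketch for extracting just those fields with the same --format mechanism the helpers already use (profile name taken from this report):

	# Print container state without the full JSON dump.
	docker inspect ha-872795 --format '{{.State.Status}} exit={{.State.ExitCode}} finished={{.State.FinishedAt}}'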
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 7 (65.211377ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:247: status error: exit status 7 (may be ok)
helpers_test.go:249: "ha-872795" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (1.35s)
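The assertions at ha_test.go:545, 551 and 554 expect the full HA layout (two or more control planes, three kubelets), but the status output above lists only the primary ha-872795, consistent with the secondary and worker nodes having been removed during the earlier failed subtests. A quick sketch for checking which node containers a profile still has, assuming minikube's docker-driver convention of naming additional nodes with -m02/-m03 suffixes:

	# List every container whose name matches the profile, with its current state.
	docker ps -a --filter name=ha-872795 --format '{{.Names}}\t{{.Status}}'
	# Or ask minikube which nodes the profile still tracks.
	out/minikube-linux-amd64 node list -p ha-872795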

TestMultiControlPlane/serial/RestartCluster (368.46s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1002 20:52:17.202772   12851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 20:55:54.124120   12851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (6m7.229589637s)

-- stdout --
	* [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1002 20:52:12.352153   84542 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:52:12.352281   84542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:12.352291   84542 out.go:374] Setting ErrFile to fd 2...
	I1002 20:52:12.352298   84542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:12.353016   84542 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:52:12.353847   84542 out.go:368] Setting JSON to false
	I1002 20:52:12.354816   84542 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5681,"bootTime":1759432651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:52:12.354901   84542 start.go:140] virtualization: kvm guest
	I1002 20:52:12.356608   84542 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:52:12.358039   84542 notify.go:221] Checking for updates...
	I1002 20:52:12.358067   84542 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:52:12.359475   84542 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:52:12.360841   84542 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:52:12.362132   84542 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:52:12.363282   84542 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:52:12.364343   84542 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:52:12.365896   84542 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:12.366331   84542 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:52:12.389014   84542 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:52:12.389115   84542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:12.440987   84542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:52:12.431594508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:52:12.441088   84542 docker.go:319] overlay module found
	I1002 20:52:12.443751   84542 out.go:179] * Using the docker driver based on existing profile
	I1002 20:52:12.444967   84542 start.go:306] selected driver: docker
	I1002 20:52:12.444981   84542 start.go:936] validating driver "docker" against &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:12.445063   84542 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:52:12.445136   84542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:12.499692   84542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:52:12.49002335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:52:12.500567   84542 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:52:12.500599   84542 cni.go:84] Creating CNI manager for ""
	I1002 20:52:12.500669   84542 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:52:12.500729   84542 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:12.503553   84542 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:52:12.504787   84542 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:52:12.505884   84542 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:52:12.506921   84542 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:52:12.506957   84542 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:52:12.506974   84542 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:52:12.506986   84542 cache.go:59] Caching tarball of preloaded images
	I1002 20:52:12.507092   84542 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:52:12.507108   84542 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:52:12.507207   84542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:52:12.527120   84542 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:52:12.527147   84542 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:52:12.527169   84542 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:52:12.527198   84542 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:52:12.527256   84542 start.go:365] duration metric: took 40.003µs to acquireMachinesLock for "ha-872795"
	I1002 20:52:12.527279   84542 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:52:12.527287   84542 fix.go:55] fixHost starting: 
	I1002 20:52:12.527480   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:12.544385   84542 fix.go:113] recreateIfNeeded on ha-872795: state=Stopped err=<nil>
	W1002 20:52:12.544415   84542 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:52:12.546060   84542 out.go:252] * Restarting existing docker container for "ha-872795" ...
	I1002 20:52:12.546129   84542 cli_runner.go:164] Run: docker start ha-872795
	I1002 20:52:12.772245   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:12.791338   84542 kic.go:430] container "ha-872795" state is running.
	I1002 20:52:12.791742   84542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:52:12.809326   84542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:52:12.809517   84542 machine.go:93] provisionDockerMachine start ...
	I1002 20:52:12.809567   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:12.827593   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:12.827887   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:12.827902   84542 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:52:12.828625   84542 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54168->127.0.0.1:32793: read: connection reset by peer
	I1002 20:52:15.972698   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:52:15.972735   84542 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:52:15.972797   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:15.990741   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:15.990956   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:15.990973   84542 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:52:16.142437   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:52:16.142511   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:16.160361   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:16.160564   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:16.160579   84542 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:52:16.302266   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:52:16.302296   84542 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:52:16.302313   84542 ubuntu.go:190] setting up certificates
	I1002 20:52:16.302320   84542 provision.go:84] configureAuth start
	I1002 20:52:16.302377   84542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:52:16.319377   84542 provision.go:143] copyHostCerts
	I1002 20:52:16.319413   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:52:16.319443   84542 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:52:16.319461   84542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:52:16.319537   84542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:52:16.319672   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:52:16.319702   84542 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:52:16.319712   84542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:52:16.319756   84542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:52:16.319824   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:52:16.319866   84542 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:52:16.319876   84542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:52:16.319916   84542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:52:16.319986   84542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:52:16.769818   84542 provision.go:177] copyRemoteCerts
	I1002 20:52:16.769886   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:52:16.769928   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:16.787463   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:16.887704   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:52:16.887767   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:52:16.904272   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:52:16.904329   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 20:52:16.920641   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:52:16.920707   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:52:16.936967   84542 provision.go:87] duration metric: took 634.632967ms to configureAuth
	I1002 20:52:16.937003   84542 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:52:16.937196   84542 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:16.937308   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:16.955017   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:16.955246   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:16.955275   84542 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:52:17.205259   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:52:17.205285   84542 machine.go:96] duration metric: took 4.395755954s to provisionDockerMachine
	I1002 20:52:17.205299   84542 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:52:17.205312   84542 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:52:17.205377   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:52:17.205412   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.223368   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.323770   84542 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:52:17.327504   84542 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:52:17.327529   84542 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:52:17.327540   84542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:52:17.327579   84542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:52:17.327672   84542 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:52:17.327683   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:52:17.327765   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:52:17.335362   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:52:17.351623   84542 start.go:297] duration metric: took 146.311149ms for postStartSetup
	I1002 20:52:17.351719   84542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:52:17.351772   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.369784   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.467957   84542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:52:17.472383   84542 fix.go:57] duration metric: took 4.945089023s for fixHost
	I1002 20:52:17.472411   84542 start.go:84] releasing machines lock for "ha-872795", held for 4.94513852s
	I1002 20:52:17.472467   84542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:52:17.489531   84542 ssh_runner.go:195] Run: cat /version.json
	I1002 20:52:17.489572   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.489612   84542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:52:17.489672   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.507746   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.508356   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.604764   84542 ssh_runner.go:195] Run: systemctl --version
	I1002 20:52:17.660345   84542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:52:17.693619   84542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:52:17.698130   84542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:52:17.698182   84542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:52:17.705758   84542 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:52:17.705779   84542 start.go:496] detecting cgroup driver to use...
	I1002 20:52:17.705811   84542 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:52:17.705857   84542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:52:17.719313   84542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:52:17.730883   84542 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:52:17.730937   84542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:52:17.744989   84542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:52:17.757099   84542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:52:17.831778   84542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:52:17.908793   84542 docker.go:234] disabling docker service ...
	I1002 20:52:17.908841   84542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:52:17.922667   84542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:52:17.934489   84542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:52:18.017207   84542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:52:18.095150   84542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:52:18.107492   84542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:52:18.121597   84542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:52:18.121673   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.130616   84542 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:52:18.130710   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.139375   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.148104   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.156885   84542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:52:18.164947   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.173732   84542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.182183   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.191547   84542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:52:18.199437   84542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:52:18.206383   84542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:52:18.282056   84542 ssh_runner.go:195] Run: sudo systemctl restart crio
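The sed edits between 20:52:18.121 and 20:52:18.182 rewrite /etc/crio/crio.conf.d/02-crio.conf in place, and the restart above is what makes them take effect. A hedged way to confirm the drop-in ended up with the values those substitutions imply (the grep is an illustrative check, not part of the test):

    # Expected net effect of the logged sed edits:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ... ]
    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf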
	I1002 20:52:18.382052   84542 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:52:18.382107   84542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:52:18.385801   84542 start.go:564] Will wait 60s for crictl version
	I1002 20:52:18.385851   84542 ssh_runner.go:195] Run: which crictl
	I1002 20:52:18.389097   84542 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:52:18.412774   84542 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:52:18.412858   84542 ssh_runner.go:195] Run: crio --version
	I1002 20:52:18.439483   84542 ssh_runner.go:195] Run: crio --version
	I1002 20:52:18.467303   84542 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:52:18.468633   84542 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:52:18.485148   84542 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:52:18.489207   84542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:52:18.499465   84542 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:52:18.499579   84542 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:52:18.499630   84542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:52:18.530560   84542 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:52:18.530580   84542 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:52:18.530619   84542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:52:18.555058   84542 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:52:18.555079   84542 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:52:18.555086   84542 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:52:18.555178   84542 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
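The [Service] block above uses the standard systemd drop-in idiom: the first, empty ExecStart= clears the ExecStart inherited from kubelet.service, and the second line sets the full kubelet command. The log ships this as a 359-byte drop-in (see the scp to 10-kubeadm.conf below); a sketch of the equivalent manual install, with the content taken from the unit dump above:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' \
        '[Unit]' \
        'Wants=crio.service' \
        '' \
        '[Service]' \
        'ExecStart=' \
        'ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2' |
        sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload   # required before the drop-in is picked up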
	I1002 20:52:18.555236   84542 ssh_runner.go:195] Run: crio config
	I1002 20:52:18.597955   84542 cni.go:84] Creating CNI manager for ""
	I1002 20:52:18.597975   84542 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:52:18.597996   84542 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:52:18.598014   84542 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:52:18.598135   84542 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:52:18.598204   84542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:52:18.606091   84542 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:52:18.606154   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:52:18.613510   84542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:52:18.625264   84542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:52:18.636674   84542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
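The generated kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new (2205 bytes, per the scp line above) and later diffed against the live /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguring (the diff runs at 20:52:19.395 below). The same comparison can be reproduced by hand:

    # No output and exit status 0 means the staged config matches the running one,
    # which is why the log concludes "The running cluster does not require reconfiguration".
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new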
	I1002 20:52:18.648668   84542 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:52:18.652199   84542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
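Both /etc/hosts updates in this log (host.minikube.internal at 20:52:18.489, control-plane.minikube.internal here) use the same filter-then-append idiom: strip any old entry, write the result plus the new entry to a temp file, then sudo cp over the original so the existing inode, ownership, and mode of /etc/hosts are preserved. A generalized sketch (NAME and IP are placeholders):

    NAME=control-plane.minikube.internal
    IP=192.168.49.2
    # Drop any stale line ending in "<tab>NAME", then append the pinned entry.
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$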
	I1002 20:52:18.661567   84542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:52:18.736767   84542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:52:18.757803   84542 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:52:18.757823   84542 certs.go:195] generating shared ca certs ...
	I1002 20:52:18.757838   84542 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:18.757992   84542 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:52:18.758045   84542 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:52:18.758057   84542 certs.go:257] generating profile certs ...
	I1002 20:52:18.758171   84542 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:52:18.758242   84542 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4
	I1002 20:52:18.758293   84542 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:52:18.758306   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:52:18.758320   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:52:18.758339   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:52:18.758358   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:52:18.758374   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:52:18.758391   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:52:18.758406   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:52:18.758423   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:52:18.758486   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:52:18.758524   84542 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:52:18.758537   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:52:18.758570   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:52:18.758608   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:52:18.758638   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:52:18.758717   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:52:18.758756   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:52:18.758777   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:18.758793   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:52:18.759515   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:52:18.777064   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:52:18.794759   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:52:18.812947   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:52:18.834586   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 20:52:18.852127   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:52:18.867998   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:52:18.884379   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:52:18.900378   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:52:18.916888   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:52:18.933083   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:52:18.950026   84542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:52:18.961812   84542 ssh_runner.go:195] Run: openssl version
	I1002 20:52:18.967585   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:52:18.975573   84542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:18.979135   84542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:18.979186   84542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:19.012717   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:52:19.020807   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:52:19.029221   84542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:52:19.032921   84542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:52:19.032976   84542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:52:19.066315   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:52:19.074461   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:52:19.082874   84542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:52:19.086359   84542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:52:19.086398   84542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:52:19.120256   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
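The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed certificate directory layout: TLS clients that trust /etc/ssl/certs look CAs up by subject-name hash, so each certificate must be reachable through a <hash>.0 symlink. Reproducing the first pair by hand (the hash value comes from the b5213941.0 line above):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0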
	I1002 20:52:19.128343   84542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:52:19.131926   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:52:19.165248   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:52:19.198547   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:52:19.231870   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:52:19.270733   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:52:19.308097   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
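The six openssl x509 -checkend 86400 runs above verify that none of the control-plane client and serving certificates expires within the next 24 hours (86400 seconds): -checkend exits 0 if the certificate is still valid that far into the future and non-zero otherwise. A compact equivalent loop, with the certificate list copied from the log:

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
        sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
            || echo "$c expires within 24h"
    done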
	I1002 20:52:19.350811   84542 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:19.350914   84542 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:52:19.350967   84542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:52:19.377617   84542 cri.go:89] found id: ""
	I1002 20:52:19.377716   84542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:52:19.385510   84542 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:52:19.385528   84542 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:52:19.385564   84542 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:52:19.392672   84542 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:52:19.393125   84542 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:52:19.393254   84542 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-9327/kubeconfig needs updating (will repair): [kubeconfig missing "ha-872795" cluster setting kubeconfig missing "ha-872795" context setting]
	I1002 20:52:19.393585   84542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:19.394226   84542 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:52:19.394732   84542 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:52:19.394755   84542 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:52:19.394766   84542 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:52:19.394772   84542 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:52:19.394777   84542 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:52:19.394827   84542 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:52:19.395209   84542 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:52:19.402694   84542 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:52:19.402727   84542 kubeadm.go:601] duration metric: took 17.194012ms to restartPrimaryControlPlane
	I1002 20:52:19.402739   84542 kubeadm.go:402] duration metric: took 51.94088ms to StartCluster
	I1002 20:52:19.402759   84542 settings.go:142] acquiring lock: {Name:mk7417292ad351472478c5266970b58ebdd4130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:19.402828   84542 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:52:19.403515   84542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:19.403777   84542 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:52:19.403833   84542 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:52:19.403924   84542 addons.go:69] Setting storage-provisioner=true in profile "ha-872795"
	I1002 20:52:19.403946   84542 addons.go:238] Setting addon storage-provisioner=true in "ha-872795"
	I1002 20:52:19.403971   84542 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:19.403980   84542 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:52:19.403941   84542 addons.go:69] Setting default-storageclass=true in profile "ha-872795"
	I1002 20:52:19.404021   84542 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-872795"
	I1002 20:52:19.404264   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:19.404354   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:19.408264   84542 out.go:179] * Verifying Kubernetes components...
	I1002 20:52:19.409793   84542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:52:19.423163   84542 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:52:19.423551   84542 addons.go:238] Setting addon default-storageclass=true in "ha-872795"
	I1002 20:52:19.423620   84542 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:52:19.424084   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:19.424808   84542 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:52:19.426120   84542 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:52:19.426142   84542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:52:19.426195   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:19.448766   84542 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:52:19.448788   84542 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:52:19.448846   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:19.451068   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:19.470398   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:19.516165   84542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:52:19.528726   84542 node_ready.go:35] waiting up to 6m0s for node "ha-872795" to be "Ready" ...
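node_ready.go polls GET /api/v1/nodes/ha-872795 until the node reports a Ready condition; the repeated "connection refused" warnings below are that poll failing while the restarted apiserver is still coming up. A roughly equivalent manual check, assuming the profile's kubeconfig context is usable (the test itself talks to the apiserver directly):

    kubectl --context ha-872795 wait node/ha-872795 --for=condition=Ready --timeout=6m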
	I1002 20:52:19.561681   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:52:19.574771   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:19.615332   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.615389   84542 retry.go:31] will retry after 249.743741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:19.627513   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.627547   84542 retry.go:31] will retry after 352.813922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
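Each failed apply is retried by retry.go with a randomized, roughly increasing delay (249ms, 352ms, 559ms, ... climbing to 9.5s later in this log) until the apiserver on localhost:8443 starts answering. A hedged bash sketch of the same shape; the grow-with-jitter schedule below approximates what the logged delays suggest and is not minikube's exact algorithm:

    delay=0.25
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
          /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
          -f /etc/kubernetes/addons/storage-provisioner.yaml; do
        sleep "$delay"
        # grow the delay by a random factor in [1.5, 2.5), like the logged schedule
        delay=$(awk -v d="$delay" 'BEGIN { srand(); printf "%.3f", d * (1.5 + rand()) }')
    done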
	I1002 20:52:19.865823   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:19.919409   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.919443   84542 retry.go:31] will retry after 559.091624ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.980554   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:20.031881   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.031917   84542 retry.go:31] will retry after 209.83145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.242384   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:20.294555   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.294585   84542 retry.go:31] will retry after 773.589013ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.478908   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:20.529665   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.529699   84542 retry.go:31] will retry after 355.05837ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.885227   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:20.936319   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.936345   84542 retry.go:31] will retry after 627.720922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:21.069211   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:21.121770   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:21.121807   84542 retry.go:31] will retry after 1.242020524s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:21.529637   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:21.564790   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:21.617241   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:21.617280   84542 retry.go:31] will retry after 1.30407142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.364852   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:22.417314   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.417351   84542 retry.go:31] will retry after 1.575136446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.921528   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:22.971730   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.971760   84542 retry.go:31] will retry after 2.09594632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:23.530178   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:23.992771   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:24.045329   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:24.045366   84542 retry.go:31] will retry after 2.458367507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:25.068398   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:25.119280   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:25.119306   84542 retry.go:31] will retry after 2.791921669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:25.530272   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:26.504897   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:26.556428   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:26.556454   84542 retry.go:31] will retry after 1.449933818s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:27.912150   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:27.963040   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:27.963072   84542 retry.go:31] will retry after 3.952294259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:28.007231   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:28.030134   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:28.059164   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:28.059196   84542 retry.go:31] will retry after 5.898569741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:30.529371   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:31.915686   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:31.966677   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:31.966712   84542 retry.go:31] will retry after 9.505491694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:33.029347   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:33.958860   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:34.011198   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:34.011224   84542 retry.go:31] will retry after 3.955486716s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:35.029541   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:37.529312   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:37.967865   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:38.020105   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:38.020135   84542 retry.go:31] will retry after 14.344631794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:39.529637   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:41.472984   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:41.524664   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:41.524701   84542 retry.go:31] will retry after 14.131328473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:41.529983   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:43.530323   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:46.030267   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:48.529270   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:50.530344   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:52.365841   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:52.416707   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:52.416739   84542 retry.go:31] will retry after 8.612648854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:53.030261   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:55.530162   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:55.656412   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:55.708907   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:55.708941   84542 retry.go:31] will retry after 16.863018796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:57.530262   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:00.029774   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:01.029765   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:53:01.082336   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:01.082362   84542 retry.go:31] will retry after 16.45700088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:02.529635   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:04.530102   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:07.029312   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:09.029378   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:11.029761   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:12.572294   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:53:12.623265   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:12.623301   84542 retry.go:31] will retry after 31.20031459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:13.030189   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:15.529409   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:17.529701   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:17.539791   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:53:17.592998   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:17.593031   84542 retry.go:31] will retry after 46.85022317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:19.530271   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:22.029341   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:24.029449   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:26.529475   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:28.529984   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:31.029344   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:33.029703   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:35.030147   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:37.529225   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:39.529316   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:41.529348   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:43.529864   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:43.824308   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:53:43.879519   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:43.879556   84542 retry.go:31] will retry after 26.923177778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:46.029215   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:48.030264   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:50.529262   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:53.029201   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:55.029266   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:57.529247   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:59.529324   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:01.529385   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:03.530255   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:54:04.443642   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:54:04.494168   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:54:04.494289   84542 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 20:54:06.029363   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:08.030124   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:10.529280   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:54:10.803751   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:54:10.855207   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:54:10.855322   84542 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:54:10.857534   84542 out.go:179] * Enabled addons: 
	I1002 20:54:10.858858   84542 addons.go:514] duration metric: took 1m51.455034236s for enable addons: enabled=[]
	W1002 20:54:12.530223   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:15.030268   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:17.529366   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:19.529680   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:22.029332   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:24.529254   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:26.530225   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:28.530316   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:31.030201   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:33.530203   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:36.029295   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:38.030258   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:40.530189   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:43.030209   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:45.530056   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:47.530192   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:50.030236   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:52.529213   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:54.530066   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:57.030049   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:59.030131   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:01.030201   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:03.530277   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:06.030191   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:08.530048   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:10.530113   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:12.530257   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:15.030055   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:17.030094   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:19.529236   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:21.530190   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:24.030104   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:26.530048   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:28.530115   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:31.030158   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:33.030337   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:35.530143   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:37.530179   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:40.029256   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:42.030319   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:44.530235   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:47.030230   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:49.529240   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:51.530107   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:53.530213   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:56.030066   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:58.030194   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:00.530210   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:03.030252   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:05.530101   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:08.030040   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:10.030200   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:12.530199   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:15.030085   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:17.530117   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:20.030182   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:22.529368   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:24.529529   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:26.529996   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:29.029489   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:31.029783   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:33.529383   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:35.529618   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:37.530223   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:40.029460   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:42.029806   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:44.529381   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:46.529604   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:48.530149   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:51.029550   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:53.030094   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:55.529502   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:57.529987   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:00.029421   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:02.029789   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:04.030225   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:06.529307   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:08.529566   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:10.529758   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:12.530115   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:15.030229   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:17.529298   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:19.529553   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:22.029498   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:24.029732   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:26.030184   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:28.529315   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:30.529396   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:32.529569   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:34.529815   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:36.530227   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:39.029287   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:41.029371   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:43.529376   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:46.029375   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:48.029705   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:50.030099   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:52.529285   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:55.029242   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:57.529274   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:59.529506   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:01.529548   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:03.530105   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:06.029214   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:08.029269   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:10.529276   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:13.029280   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:15.029340   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:17.529431   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:58:19.529451   84542 node_ready.go:38] duration metric: took 6m0.000520422s for node "ha-872795" to be "Ready" ...
	I1002 20:58:19.532185   84542 out.go:203] 
	W1002 20:58:19.533451   84542 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:58:19.533467   84542 out.go:285] * 
	* 
	W1002 20:58:19.535106   84542 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:58:19.536199   84542 out.go:203] 

** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 -p ha-872795 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
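Every failure in the log above reduces to one symptom: each kubectl apply and each node-Ready poll dies with "connection refused" on port 8443, meaning the apiserver never came back up inside the restarted node before the 6m0s WaitNodeCondition deadline. A minimal manual check along these lines can confirm whether the apiserver is listening at all (the profile/container name ha-872795 is taken from the log; the curl probe is an illustrative diagnostic run by hand, not part of the test harness, and assumes curl is available inside the node container):

	# probe the apiserver from inside the node container (hypothetical diagnostic)
	docker exec ha-872795 curl -k -sS --max-time 5 https://localhost:8443/healthz \
	  || echo "apiserver not listening on 8443"
	# collect logs for a GitHub issue, as the error box in the log suggests
	out/minikube-linux-amd64 -p ha-872795 logs --file=logs.txt

If the probe fails, the problem is the control plane itself (apiserver/etcd not starting after restart), not the addon manifests that kept failing validation above.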
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 84738,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:52:12.570713623Z",
	            "FinishedAt": "2025-10-02T20:52:11.285843333Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dbe6cbb50399e5abdf61351254e97adbece9a5a3bd792d1e2b031f8c07b08d4b",
	            "SandboxKey": "/var/run/docker/netns/dbe6cbb50399",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:64:7f:41:cf:12",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "5bcad108b7e4fcc5d99139f5eebb0ef8974d98c9438fae15ef0758e9e96b01c7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
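
The inspect output above shows the Docker side of the restart is healthy: State.Status is "running", RestartCount is 0, and the usual minikube ports (22, 2376, 5000, 8443, 32443) are mapped to localhost. When triaging reports like this one, the same fields can be pulled directly with Go templates, the same syntax minikube itself uses later in this log; a minimal sketch, assuming the ha-872795 container still exists on the host:

    # container state only
    docker inspect -f '{{.State.Status}}' ha-872795
    # host port mapped to the apiserver port inside the node
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-872795

Since the container is up, the GUEST_START failure above points inside the guest (the node never reported Ready within the 6m0s wait), not at the Docker driver.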
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 2 (281.617904ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
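
As the "(may be ok)" note says, a non-zero exit from minikube status during a post-mortem is expected: status encodes component health in its exit code, so the host container can report Running while the command still exits non-zero because cluster components are unhealthy. A sketch of the unfiltered form, which prints the per-component breakdown instead of only the Host field:

    out/minikube-linux-amd64 status -p ha-872795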
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node add --alsologtostderr -v 5                                                    │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node stop m02 --alsologtostderr -v 5                                               │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node start m02 --alsologtostderr -v 5                                              │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node list --alsologtostderr -v 5                                                   │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ stop    │ ha-872795 stop --alsologtostderr -v 5                                                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │ 02 Oct 25 20:45 UTC │
	│ start   │ ha-872795 start --wait true --alsologtostderr -v 5                                           │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node list --alsologtostderr -v 5                                                   │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	│ node    │ ha-872795 node delete m03 --alsologtostderr -v 5                                             │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	│ stop    │ ha-872795 stop --alsologtostderr -v 5                                                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │ 02 Oct 25 20:52 UTC │
	│ start   │ ha-872795 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:52:12
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:52:12.352153   84542 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:52:12.352281   84542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:12.352291   84542 out.go:374] Setting ErrFile to fd 2...
	I1002 20:52:12.352298   84542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:12.353016   84542 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:52:12.353847   84542 out.go:368] Setting JSON to false
	I1002 20:52:12.354816   84542 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5681,"bootTime":1759432651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:52:12.354901   84542 start.go:140] virtualization: kvm guest
	I1002 20:52:12.356608   84542 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:52:12.358039   84542 notify.go:221] Checking for updates...
	I1002 20:52:12.358067   84542 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:52:12.359475   84542 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:52:12.360841   84542 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:52:12.362132   84542 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:52:12.363282   84542 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:52:12.364343   84542 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:52:12.365896   84542 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:12.366331   84542 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:52:12.389014   84542 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:52:12.389115   84542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:12.440987   84542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:52:12.431594508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:52:12.441088   84542 docker.go:319] overlay module found
	I1002 20:52:12.443751   84542 out.go:179] * Using the docker driver based on existing profile
	I1002 20:52:12.444967   84542 start.go:306] selected driver: docker
	I1002 20:52:12.444981   84542 start.go:936] validating driver "docker" against &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:12.445063   84542 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:52:12.445136   84542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:12.499692   84542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:52:12.49002335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:52:12.500567   84542 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:52:12.500599   84542 cni.go:84] Creating CNI manager for ""
	I1002 20:52:12.500669   84542 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:52:12.500729   84542 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:12.503553   84542 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:52:12.504787   84542 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:52:12.505884   84542 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:52:12.506921   84542 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:52:12.506957   84542 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:52:12.506974   84542 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:52:12.506986   84542 cache.go:59] Caching tarball of preloaded images
	I1002 20:52:12.507092   84542 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:52:12.507108   84542 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:52:12.507207   84542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:52:12.527120   84542 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:52:12.527147   84542 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:52:12.527169   84542 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:52:12.527198   84542 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:52:12.527256   84542 start.go:365] duration metric: took 40.003µs to acquireMachinesLock for "ha-872795"
	I1002 20:52:12.527279   84542 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:52:12.527287   84542 fix.go:55] fixHost starting: 
	I1002 20:52:12.527480   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:12.544385   84542 fix.go:113] recreateIfNeeded on ha-872795: state=Stopped err=<nil>
	W1002 20:52:12.544415   84542 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:52:12.546060   84542 out.go:252] * Restarting existing docker container for "ha-872795" ...
	I1002 20:52:12.546129   84542 cli_runner.go:164] Run: docker start ha-872795
	I1002 20:52:12.772245   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:12.791338   84542 kic.go:430] container "ha-872795" state is running.
	I1002 20:52:12.791742   84542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:52:12.809326   84542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:52:12.809517   84542 machine.go:93] provisionDockerMachine start ...
	I1002 20:52:12.809567   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:12.827593   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:12.827887   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:12.827902   84542 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:52:12.828625   84542 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54168->127.0.0.1:32793: read: connection reset by peer
	I1002 20:52:15.972698   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:52:15.972735   84542 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:52:15.972797   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:15.990741   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:15.990956   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:15.990973   84542 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:52:16.142437   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:52:16.142511   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:16.160361   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:16.160564   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:16.160579   84542 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:52:16.302266   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:52:16.302296   84542 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:52:16.302313   84542 ubuntu.go:190] setting up certificates
	I1002 20:52:16.302320   84542 provision.go:84] configureAuth start
	I1002 20:52:16.302377   84542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:52:16.319377   84542 provision.go:143] copyHostCerts
	I1002 20:52:16.319413   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:52:16.319443   84542 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:52:16.319461   84542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:52:16.319537   84542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:52:16.319672   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:52:16.319702   84542 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:52:16.319712   84542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:52:16.319756   84542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:52:16.319824   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:52:16.319866   84542 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:52:16.319876   84542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:52:16.319916   84542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:52:16.319986   84542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:52:16.769818   84542 provision.go:177] copyRemoteCerts
	I1002 20:52:16.769886   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:52:16.769928   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:16.787463   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:16.887704   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:52:16.887767   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:52:16.904272   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:52:16.904329   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 20:52:16.920641   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:52:16.920707   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:52:16.936967   84542 provision.go:87] duration metric: took 634.632967ms to configureAuth
	I1002 20:52:16.937003   84542 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:52:16.937196   84542 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:16.937308   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:16.955017   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:16.955246   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:16.955275   84542 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:52:17.205259   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:52:17.205285   84542 machine.go:96] duration metric: took 4.395755954s to provisionDockerMachine
	I1002 20:52:17.205299   84542 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:52:17.205312   84542 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:52:17.205377   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:52:17.205412   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.223368   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.323770   84542 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:52:17.327504   84542 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:52:17.327529   84542 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:52:17.327540   84542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:52:17.327579   84542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:52:17.327672   84542 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:52:17.327683   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:52:17.327765   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:52:17.335362   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:52:17.351623   84542 start.go:297] duration metric: took 146.311149ms for postStartSetup
	I1002 20:52:17.351719   84542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:52:17.351772   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.369784   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.467957   84542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:52:17.472383   84542 fix.go:57] duration metric: took 4.945089023s for fixHost
	I1002 20:52:17.472411   84542 start.go:84] releasing machines lock for "ha-872795", held for 4.94513852s
	I1002 20:52:17.472467   84542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:52:17.489531   84542 ssh_runner.go:195] Run: cat /version.json
	I1002 20:52:17.489572   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.489612   84542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:52:17.489672   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.507746   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.508356   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.604764   84542 ssh_runner.go:195] Run: systemctl --version
	I1002 20:52:17.660345   84542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:52:17.693619   84542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:52:17.698130   84542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:52:17.698182   84542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:52:17.705758   84542 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:52:17.705779   84542 start.go:496] detecting cgroup driver to use...
	I1002 20:52:17.705811   84542 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:52:17.705857   84542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:52:17.719313   84542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:52:17.730883   84542 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:52:17.730937   84542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:52:17.744989   84542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:52:17.757099   84542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:52:17.831778   84542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:52:17.908793   84542 docker.go:234] disabling docker service ...
	I1002 20:52:17.908841   84542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:52:17.922667   84542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:52:17.934489   84542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:52:18.017207   84542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:52:18.095150   84542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:52:18.107492   84542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:52:18.121597   84542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:52:18.121673   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.130616   84542 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:52:18.130710   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.139375   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.148104   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.156885   84542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:52:18.164947   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.173732   84542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.182183   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.191547   84542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:52:18.199437   84542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:52:18.206383   84542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:52:18.282056   84542 ssh_runner.go:195] Run: sudo systemctl restart crio
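
	The run of sed commands above is minikube rewriting the cri-o drop-in (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) before restarting the service. If the restart had then failed, a reasonable first check would be to confirm the edits landed; a sketch, assuming the node is still reachable over SSH:

	    out/minikube-linux-amd64 -p ha-872795 ssh -- "grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"

	Here the restart succeeded and crictl reports CRI-O 1.34.1 a few lines below, so the container runtime is not the failing component.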
	I1002 20:52:18.382052   84542 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:52:18.382107   84542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:52:18.385801   84542 start.go:564] Will wait 60s for crictl version
	I1002 20:52:18.385851   84542 ssh_runner.go:195] Run: which crictl
	I1002 20:52:18.389097   84542 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:52:18.412774   84542 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:52:18.412858   84542 ssh_runner.go:195] Run: crio --version
	I1002 20:52:18.439483   84542 ssh_runner.go:195] Run: crio --version
	I1002 20:52:18.467303   84542 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:52:18.468633   84542 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:52:18.485148   84542 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:52:18.489207   84542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:52:18.499465   84542 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:52:18.499579   84542 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:52:18.499630   84542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:52:18.530560   84542 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:52:18.530580   84542 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:52:18.530619   84542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:52:18.555058   84542 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:52:18.555079   84542 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:52:18.555086   84542 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:52:18.555178   84542 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
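
	The unit fragment above is the kubelet systemd drop-in minikube generates (it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf shortly below). The empty ExecStart= line is standard systemd idiom: it clears any ExecStart inherited from the base unit so the following line replaces it instead of adding a second invocation. To see the merged unit the node actually runs, one could use systemctl's built-in view; a sketch, assuming SSH access to the profile:

	    out/minikube-linux-amd64 -p ha-872795 ssh -- sudo systemctl cat kubelet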
	I1002 20:52:18.555236   84542 ssh_runner.go:195] Run: crio config
	I1002 20:52:18.597955   84542 cni.go:84] Creating CNI manager for ""
	I1002 20:52:18.597975   84542 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:52:18.597996   84542 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:52:18.598014   84542 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:52:18.598135   84542 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:52:18.598204   84542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:52:18.606091   84542 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:52:18.606154   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:52:18.613510   84542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:52:18.625264   84542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:52:18.636674   84542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:52:18.648668   84542 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:52:18.652199   84542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:52:18.661567   84542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:52:18.736767   84542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:52:18.757803   84542 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:52:18.757823   84542 certs.go:195] generating shared ca certs ...
	I1002 20:52:18.757838   84542 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:18.757992   84542 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:52:18.758045   84542 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:52:18.758057   84542 certs.go:257] generating profile certs ...
	I1002 20:52:18.758171   84542 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:52:18.758242   84542 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4
	I1002 20:52:18.758293   84542 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:52:18.758306   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:52:18.758320   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:52:18.758339   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:52:18.758358   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:52:18.758374   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:52:18.758391   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:52:18.758406   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:52:18.758423   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:52:18.758486   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:52:18.758524   84542 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:52:18.758537   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:52:18.758570   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:52:18.758608   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:52:18.758638   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:52:18.758717   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:52:18.758756   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:52:18.758777   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:18.758793   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:52:18.759515   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:52:18.777064   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:52:18.794759   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:52:18.812947   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:52:18.834586   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 20:52:18.852127   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:52:18.867998   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:52:18.884379   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:52:18.900378   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:52:18.916888   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:52:18.933083   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:52:18.950026   84542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:52:18.961812   84542 ssh_runner.go:195] Run: openssl version
	I1002 20:52:18.967585   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:52:18.975573   84542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:18.979135   84542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:18.979186   84542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:19.012717   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:52:19.020807   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:52:19.029221   84542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:52:19.032921   84542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:52:19.032976   84542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:52:19.066315   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:52:19.074461   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:52:19.082874   84542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:52:19.086359   84542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:52:19.086398   84542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:52:19.120256   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:52:19.128343   84542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:52:19.131926   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:52:19.165248   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:52:19.198547   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:52:19.231870   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:52:19.270733   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:52:19.308097   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 20:52:19.350811   84542 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:19.350914   84542 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:52:19.350967   84542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:52:19.377617   84542 cri.go:89] found id: ""
	I1002 20:52:19.377716   84542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:52:19.385510   84542 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:52:19.385528   84542 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:52:19.385564   84542 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:52:19.392672   84542 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:52:19.393125   84542 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:52:19.393254   84542 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-9327/kubeconfig needs updating (will repair): [kubeconfig missing "ha-872795" cluster setting kubeconfig missing "ha-872795" context setting]
	I1002 20:52:19.393585   84542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:19.394226   84542 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:52:19.394732   84542 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:52:19.394755   84542 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:52:19.394766   84542 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:52:19.394772   84542 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:52:19.394777   84542 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:52:19.394827   84542 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:52:19.395209   84542 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:52:19.402694   84542 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:52:19.402727   84542 kubeadm.go:601] duration metric: took 17.194012ms to restartPrimaryControlPlane
	I1002 20:52:19.402739   84542 kubeadm.go:402] duration metric: took 51.94088ms to StartCluster
	I1002 20:52:19.402759   84542 settings.go:142] acquiring lock: {Name:mk7417292ad351472478c5266970b58ebdd4130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:19.402828   84542 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:52:19.403515   84542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:19.403777   84542 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:52:19.403833   84542 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:52:19.403924   84542 addons.go:69] Setting storage-provisioner=true in profile "ha-872795"
	I1002 20:52:19.403946   84542 addons.go:238] Setting addon storage-provisioner=true in "ha-872795"
	I1002 20:52:19.403971   84542 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:19.403980   84542 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:52:19.403941   84542 addons.go:69] Setting default-storageclass=true in profile "ha-872795"
	I1002 20:52:19.404021   84542 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-872795"
	I1002 20:52:19.404264   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:19.404354   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:19.408264   84542 out.go:179] * Verifying Kubernetes components...
	I1002 20:52:19.409793   84542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:52:19.423163   84542 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:52:19.423551   84542 addons.go:238] Setting addon default-storageclass=true in "ha-872795"
	I1002 20:52:19.423620   84542 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:52:19.424084   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:19.424808   84542 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:52:19.426120   84542 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:52:19.426142   84542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:52:19.426195   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:19.448766   84542 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:52:19.448788   84542 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:52:19.448846   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:19.451068   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:19.470398   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:19.516165   84542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:52:19.528726   84542 node_ready.go:35] waiting up to 6m0s for node "ha-872795" to be "Ready" ...
	I1002 20:52:19.561681   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:52:19.574771   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:19.615332   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.615389   84542 retry.go:31] will retry after 249.743741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:19.627513   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.627547   84542 retry.go:31] will retry after 352.813922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.865823   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:19.919409   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.919443   84542 retry.go:31] will retry after 559.091624ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.980554   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:20.031881   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.031917   84542 retry.go:31] will retry after 209.83145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.242384   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:20.294555   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.294585   84542 retry.go:31] will retry after 773.589013ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.478908   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:20.529665   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.529699   84542 retry.go:31] will retry after 355.05837ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.885227   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:20.936319   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.936345   84542 retry.go:31] will retry after 627.720922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:21.069211   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:21.121770   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:21.121807   84542 retry.go:31] will retry after 1.242020524s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:21.529637   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:21.564790   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:21.617241   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:21.617280   84542 retry.go:31] will retry after 1.30407142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.364852   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:22.417314   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.417351   84542 retry.go:31] will retry after 1.575136446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.921528   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:22.971730   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.971760   84542 retry.go:31] will retry after 2.09594632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:23.530178   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:23.992771   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:24.045329   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:24.045366   84542 retry.go:31] will retry after 2.458367507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:25.068398   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:25.119280   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:25.119306   84542 retry.go:31] will retry after 2.791921669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:25.530272   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:26.504897   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:26.556428   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:26.556454   84542 retry.go:31] will retry after 1.449933818s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:27.912150   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:27.963040   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:27.963072   84542 retry.go:31] will retry after 3.952294259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:28.007231   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:28.030134   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:28.059164   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:28.059196   84542 retry.go:31] will retry after 5.898569741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:30.529371   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:31.915686   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:31.966677   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:31.966712   84542 retry.go:31] will retry after 9.505491694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:33.029347   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:33.958860   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:34.011198   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:34.011224   84542 retry.go:31] will retry after 3.955486716s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:35.029541   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:37.529312   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:37.967865   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:38.020105   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:38.020135   84542 retry.go:31] will retry after 14.344631794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:39.529637   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:41.472984   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:41.524664   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:41.524701   84542 retry.go:31] will retry after 14.131328473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:41.529983   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:43.530323   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:46.030267   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:48.529270   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:50.530344   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:52.365841   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:52.416707   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:52.416739   84542 retry.go:31] will retry after 8.612648854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:53.030261   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:55.530162   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:55.656412   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:55.708907   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:55.708941   84542 retry.go:31] will retry after 16.863018796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:57.530262   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:00.029774   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:01.029765   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:53:01.082336   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:01.082362   84542 retry.go:31] will retry after 16.45700088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:02.529635   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:04.530102   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:07.029312   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:09.029378   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:11.029761   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:12.572294   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:53:12.623265   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:12.623301   84542 retry.go:31] will retry after 31.20031459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:13.030189   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:15.529409   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:17.529701   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:17.539791   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:53:17.592998   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:17.593031   84542 retry.go:31] will retry after 46.85022317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:19.530271   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:22.029341   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:24.029449   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:26.529475   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:28.529984   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:31.029344   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:33.029703   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:35.030147   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:37.529225   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:39.529316   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:41.529348   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:43.529864   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:43.824308   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:53:43.879519   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:43.879556   84542 retry.go:31] will retry after 26.923177778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:46.029215   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:48.030264   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:50.529262   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:53.029201   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:55.029266   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:57.529247   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:59.529324   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:01.529385   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:03.530255   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:54:04.443642   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:54:04.494168   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:54:04.494289   84542 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 20:54:06.029363   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:08.030124   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:10.529280   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:54:10.803751   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:54:10.855207   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:54:10.855322   84542 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:54:10.857534   84542 out.go:179] * Enabled addons: 
	I1002 20:54:10.858858   84542 addons.go:514] duration metric: took 1m51.455034236s for enable addons: enabled=[]
	W1002 20:54:12.530223   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:15.030268   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:17.529366   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:19.529680   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:22.029332   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:24.529254   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:26.530225   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:28.530316   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:31.030201   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:33.530203   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:36.029295   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:38.030258   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:40.530189   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:43.030209   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:45.530056   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:47.530192   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:50.030236   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:52.529213   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:54.530066   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:57.030049   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:59.030131   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:01.030201   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:03.530277   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:06.030191   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:08.530048   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:10.530113   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:12.530257   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:15.030055   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:17.030094   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:19.529236   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:21.530190   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:24.030104   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:26.530048   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:28.530115   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:31.030158   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:33.030337   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:35.530143   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:37.530179   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:40.029256   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:42.030319   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:44.530235   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:47.030230   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:49.529240   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:51.530107   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:53.530213   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:56.030066   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:55:58.030194   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:00.530210   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:03.030252   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:05.530101   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:08.030040   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:10.030200   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:12.530199   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:15.030085   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:17.530117   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:20.030182   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:22.529368   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:24.529529   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:26.529996   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:29.029489   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:31.029783   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:33.529383   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:35.529618   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:37.530223   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:40.029460   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:42.029806   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:44.529381   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:46.529604   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:48.530149   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:51.029550   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:53.030094   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:55.529502   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:56:57.529987   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:00.029421   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:02.029789   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:04.030225   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:06.529307   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:08.529566   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:10.529758   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:12.530115   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:15.030229   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:17.529298   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:19.529553   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:22.029498   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:24.029732   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:26.030184   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:28.529315   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:30.529396   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:32.529569   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:34.529815   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:36.530227   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:39.029287   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:41.029371   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:43.529376   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:46.029375   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:48.029705   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:50.030099   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:52.529285   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:55.029242   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:57.529274   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:59.529506   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:01.529548   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:03.530105   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:06.029214   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:08.029269   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:10.529276   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:13.029280   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:15.029340   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:17.529431   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:58:19.529451   84542 node_ready.go:38] duration metric: took 6m0.000520422s for node "ha-872795" to be "Ready" ...
	I1002 20:58:19.532185   84542 out.go:203] 
	W1002 20:58:19.533451   84542 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:58:19.533467   84542 out.go:285] * 
	W1002 20:58:19.535106   84542 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:58:19.536199   84542 out.go:203] 
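
	The stream above is a single failure repeating in two voices: nothing answers on port 8443, so every kubectl apply dies while downloading the OpenAPI schema for validation, and every node-readiness poll gets connection refused until the 6m0s deadline expires. The --validate=false hint in the error text would not have helped, because the apply itself needs the same dead endpoint. The uneven retry gaps (8.6s, 14.1s, 16.9s, 31.2s, 46.9s) suggest jittered exponential backoff; the Go sketch below illustrates that pattern under that assumption, with names and parameters that are illustrative, not minikube's actual retry.go.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn with exponentially growing, jittered pauses.
	// Names and parameters are illustrative; this is not minikube's retry.go.
	func retryWithBackoff(fn func() error, maxAttempts int, base time.Duration) error {
		var err error
		for attempt := 0; attempt < maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			// Double the base each attempt, then jitter by +/-50%, which
			// yields uneven gaps like the 8.6s/14.1s/16.9s/31.2s above.
			step := base << uint(attempt)
			pause := step/2 + time.Duration(rand.Int63n(int64(step)))
			fmt.Printf("will retry after %v: %v\n", pause, err)
			time.Sleep(pause)
		}
		return err
	}

	func main() {
		err := retryWithBackoff(func() error {
			return errors.New("connect: connection refused")
		}, 3, 2*time.Second)
		fmt.Println("gave up:", err)
	}

	Jitter keeps concurrent retry loops (storage-provisioner and storageclass here) from hammering the same endpoint in lockstep.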
	
	
	==> CRI-O <==
	Oct 02 20:58:12 ha-872795 crio[522]: time="2025-10-02T20:58:12.866033528Z" level=info msg="createCtr: removing container bb8b90a100adb20f4c6c96b4f4ca0b28a316c38d7931ddf76f9d033ac1cfb30c" id=c7d16883-f57e-4785-9dd4-3b6ddca80ae3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:12 ha-872795 crio[522]: time="2025-10-02T20:58:12.866062481Z" level=info msg="createCtr: deleting container bb8b90a100adb20f4c6c96b4f4ca0b28a316c38d7931ddf76f9d033ac1cfb30c from storage" id=c7d16883-f57e-4785-9dd4-3b6ddca80ae3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:12 ha-872795 crio[522]: time="2025-10-02T20:58:12.86794839Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=c7d16883-f57e-4785-9dd4-3b6ddca80ae3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.843856193Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=618a4ce7-4bb3-491a-a8fb-f40243eaa9aa name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.844712532Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c4fe3b99-bd97-49db-aee3-63b5145bee37 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.845555854Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-872795/kube-apiserver" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.845824529Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.850156484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.850708589Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.86454647Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.865889543Z" level=info msg="createCtr: deleting container ID 26b741597e0e06c13ef99306cb62623e7932b12637d8672cd7ecf500ecfa352b from idIndex" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.865924233Z" level=info msg="createCtr: removing container 26b741597e0e06c13ef99306cb62623e7932b12637d8672cd7ecf500ecfa352b" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.865962224Z" level=info msg="createCtr: deleting container 26b741597e0e06c13ef99306cb62623e7932b12637d8672cd7ecf500ecfa352b from storage" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.86783495Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-872795_kube-system_107e053e4ed538c93835a81754178211_0" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.843775999Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=6b82fe88-17ec-4c57-9b05-c19050877732 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.844719915Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=abcd93c5-df4d-45f8-92bb-46d6ca77b31e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.845470274Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-872795/kube-scheduler" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.84568952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.848829819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.849375846Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.865375235Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.866624528Z" level=info msg="createCtr: deleting container ID ea61318edf925528f59c496e81abeb8580eda7ad555ee2cd11682aea6e07dc2a from idIndex" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.866669375Z" level=info msg="createCtr: removing container ea61318edf925528f59c496e81abeb8580eda7ad555ee2cd11682aea6e07dc2a" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.86670509Z" level=info msg="createCtr: deleting container ea61318edf925528f59c496e81abeb8580eda7ad555ee2cd11682aea6e07dc2a from storage" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.868605064Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-872795_kube-system_9558d13dd1c45ecd0f3d491377941404_0" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
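
	Every CreateContainer call in the CRI-O log above ends the same way: "Container creation error: cannot open sd-bus: No such file or directory". That error typically means the OCI runtime was told to use the systemd cgroup manager but cannot reach a systemd D-Bus socket inside the node container, so no control-plane container is ever created, which in turn explains the dead apiserver earlier in this report. A minimal Go sketch of that precondition check follows; the socket paths are conventional systemd defaults, assumed here rather than read from this job's configuration.

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Conventional systemd bus sockets a systemd cgroup manager would
		// dial; assumed defaults, not taken from this report.
		for _, p := range []string{
			"/run/systemd/private",
			"/run/dbus/system_bus_socket",
		} {
			if _, err := os.Stat(p); err != nil {
				// Matches the "cannot open sd-bus" symptom above.
				fmt.Printf("missing %s: %v\n", p, err)
				continue
			}
			fmt.Printf("found %s\n", p)
		}
	}

	If that diagnosis is confirmed, the usual remedies are a working systemd inside the node image or switching the runtime's cgroup manager to cgroupfs; which applies here cannot be determined from this log alone.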
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:20.403385    2009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:58:20.403911    2009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:58:20.405431    2009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:58:20.405818    2009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:58:20.407412    2009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
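
	All five kubectl probes above fail before any API logic runs: the TCP dial to localhost:8443 is refused because the apiserver container was never created (see the CRI-O and kubelet sections). The whole "describe nodes" failure therefore reduces to a connectivity check like this sketch, with the endpoint taken from the errors above.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same endpoint the kubectl errors above dial.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // expected here: connection refused
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}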
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:58:20 up  1:40,  0 user,  load average: 0.00, 0.07, 0.13
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:58:12 ha-872795 kubelet[671]: E1002 20:58:12.868301     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:58:12 ha-872795 kubelet[671]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-872795_kube-system(bb93bd54c951d044e2ddbaf0dd48a41c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:12 ha-872795 kubelet[671]:  > logger="UnhandledError"
	Oct 02 20:58:12 ha-872795 kubelet[671]: E1002 20:58:12.868334     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-872795" podUID="bb93bd54c951d044e2ddbaf0dd48a41c"
	Oct 02 20:58:13 ha-872795 kubelet[671]: E1002 20:58:13.126088     671 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 02 20:58:14 ha-872795 kubelet[671]: E1002 20:58:14.480735     671 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:58:14 ha-872795 kubelet[671]: I1002 20:58:14.649734     671 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:58:14 ha-872795 kubelet[671]: E1002 20:58:14.650105     671 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:58:16 ha-872795 kubelet[671]: E1002 20:58:16.843416     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:58:16 ha-872795 kubelet[671]: E1002 20:58:16.868095     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:58:16 ha-872795 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:16 ha-872795 kubelet[671]:  > podSandboxID="882c47de7209ee0ba716a6023c26fffe30919d4843ca2e421dafbefd6c9534da"
	Oct 02 20:58:16 ha-872795 kubelet[671]: E1002 20:58:16.868180     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:58:16 ha-872795 kubelet[671]:         container kube-apiserver start failed in pod kube-apiserver-ha-872795_kube-system(107e053e4ed538c93835a81754178211): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:16 ha-872795 kubelet[671]:  > logger="UnhandledError"
	Oct 02 20:58:16 ha-872795 kubelet[671]: E1002 20:58:16.868209     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-872795" podUID="107e053e4ed538c93835a81754178211"
	Oct 02 20:58:17 ha-872795 kubelet[671]: E1002 20:58:17.843341     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:58:17 ha-872795 kubelet[671]: E1002 20:58:17.868908     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:58:17 ha-872795 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:17 ha-872795 kubelet[671]:  > podSandboxID="7a6998e86547c4fc510950e02f70bd6ee0f981ac1b56d6bfea37794d1ce0aad6"
	Oct 02 20:58:17 ha-872795 kubelet[671]: E1002 20:58:17.869005     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:58:17 ha-872795 kubelet[671]:         container kube-scheduler start failed in pod kube-scheduler-ha-872795_kube-system(9558d13dd1c45ecd0f3d491377941404): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:17 ha-872795 kubelet[671]:  > logger="UnhandledError"
	Oct 02 20:58:17 ha-872795 kubelet[671]: E1002 20:58:17.869035     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-872795" podUID="9558d13dd1c45ecd0f3d491377941404"
	Oct 02 20:58:18 ha-872795 kubelet[671]: E1002 20:58:18.858006     671 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-872795\" not found"
	

-- /stdout --
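
Every static pod in the kubelet log above fails at the same point: "container create failed: cannot open sd-bus: No such file or directory". cri-o on this node is configured with cgroup_manager = "systemd" (see the sed rewrite of /etc/crio/crio.conf.d/02-crio.conf in the Last Start log further down), so the OCI runtime opens a systemd bus connection to place each container in a scope; the error suggests no systemd bus socket was reachable from the runtime, so no control-plane container could be created and the apiserver never came back, which is why RestartCluster timed out. A minimal triage sketch in Go — the socket paths are assumptions for illustration, not taken from this report:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Candidate systemd bus endpoints (assumed paths): if neither exists
		// inside the node container, "cannot open sd-bus" is expected.
		for _, p := range []string{
			"/run/systemd/private",        // systemd's private bus socket
			"/run/dbus/system_bus_socket", // the system D-Bus socket
		} {
			if _, err := os.Stat(p); err != nil {
				fmt.Printf("%s: %v\n", p, err)
			} else {
				fmt.Printf("%s: present\n", p)
			}
		}
	}
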
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 2 (283.325015ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (368.46s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-872795" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-872795\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-872795\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-872795\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 84738,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:52:12.570713623Z",
	            "FinishedAt": "2025-10-02T20:52:11.285843333Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dbe6cbb50399e5abdf61351254e97adbece9a5a3bd792d1e2b031f8c07b08d4b",
	            "SandboxKey": "/var/run/docker/netns/dbe6cbb50399",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:64:7f:41:cf:12",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "5bcad108b7e4fcc5d99139f5eebb0ef8974d98c9438fae15ef0758e9e96b01c7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
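
Individual fields of the inspect output above can be pulled with Go templates instead of reading the full JSON; the two format strings below are the ones minikube itself runs in the "Last Start" log further down (container state and the published SSH port). A minimal sketch, assuming a local docker CLI and the ha-872795 container:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// inspect runs `docker container inspect -f <format> ha-872795`.
	func inspect(format string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "ha-872795").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		state, _ := inspect("{{.State.Status}}")
		port, _ := inspect(`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
		fmt.Println("state:", state)   // "running" per the inspect output above
		fmt.Println("ssh port:", port) // "32793" per the inspect output above
	}
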
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 2 (277.877406ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
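
Host reports "Running" here while APIServer reported "Stopped" above: the node container is up, but the control plane inside it is not, consistent with the CreateContainerError loop in the kubelet log. The --format flags are ordinary Go templates rendered over minikube's status struct; a minimal sketch with a stand-in struct (field names come from the commands above, not from minikube's actual type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status mimics the two fields the report queries with
	// --format={{.Host}} and --format={{.APIServer}}.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Stopped"} // values observed above
		tmpl := template.Must(template.New("status").Parse("{{.Host}} / {{.APIServer}}\n"))
		tmpl.Execute(os.Stdout, st) // prints: Running / Stopped
	}
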
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node add --alsologtostderr -v 5                                                    │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node stop m02 --alsologtostderr -v 5                                               │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node start m02 --alsologtostderr -v 5                                              │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node list --alsologtostderr -v 5                                                   │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ stop    │ ha-872795 stop --alsologtostderr -v 5                                                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │ 02 Oct 25 20:45 UTC │
	│ start   │ ha-872795 start --wait true --alsologtostderr -v 5                                           │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node list --alsologtostderr -v 5                                                   │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	│ node    │ ha-872795 node delete m03 --alsologtostderr -v 5                                             │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	│ stop    │ ha-872795 stop --alsologtostderr -v 5                                                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │ 02 Oct 25 20:52 UTC │
	│ start   │ ha-872795 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:52:12
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:52:12.352153   84542 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:52:12.352281   84542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:12.352291   84542 out.go:374] Setting ErrFile to fd 2...
	I1002 20:52:12.352298   84542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:12.353016   84542 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:52:12.353847   84542 out.go:368] Setting JSON to false
	I1002 20:52:12.354816   84542 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5681,"bootTime":1759432651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:52:12.354901   84542 start.go:140] virtualization: kvm guest
	I1002 20:52:12.356608   84542 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:52:12.358039   84542 notify.go:221] Checking for updates...
	I1002 20:52:12.358067   84542 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:52:12.359475   84542 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:52:12.360841   84542 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:52:12.362132   84542 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:52:12.363282   84542 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:52:12.364343   84542 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:52:12.365896   84542 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:12.366331   84542 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:52:12.389014   84542 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:52:12.389115   84542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:12.440987   84542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:52:12.431594508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:52:12.441088   84542 docker.go:319] overlay module found
	I1002 20:52:12.443751   84542 out.go:179] * Using the docker driver based on existing profile
	I1002 20:52:12.444967   84542 start.go:306] selected driver: docker
	I1002 20:52:12.444981   84542 start.go:936] validating driver "docker" against &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:12.445063   84542 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:52:12.445136   84542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:12.499692   84542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:52:12.49002335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:52:12.500567   84542 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:52:12.500599   84542 cni.go:84] Creating CNI manager for ""
	I1002 20:52:12.500669   84542 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:52:12.500729   84542 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:12.503553   84542 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:52:12.504787   84542 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:52:12.505884   84542 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:52:12.506921   84542 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:52:12.506957   84542 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:52:12.506974   84542 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:52:12.506986   84542 cache.go:59] Caching tarball of preloaded images
	I1002 20:52:12.507092   84542 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:52:12.507108   84542 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:52:12.507207   84542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:52:12.527120   84542 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:52:12.527147   84542 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:52:12.527169   84542 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:52:12.527198   84542 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:52:12.527256   84542 start.go:365] duration metric: took 40.003µs to acquireMachinesLock for "ha-872795"
	I1002 20:52:12.527279   84542 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:52:12.527287   84542 fix.go:55] fixHost starting: 
	I1002 20:52:12.527480   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:12.544385   84542 fix.go:113] recreateIfNeeded on ha-872795: state=Stopped err=<nil>
	W1002 20:52:12.544415   84542 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:52:12.546060   84542 out.go:252] * Restarting existing docker container for "ha-872795" ...
	I1002 20:52:12.546129   84542 cli_runner.go:164] Run: docker start ha-872795
	I1002 20:52:12.772245   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:12.791338   84542 kic.go:430] container "ha-872795" state is running.
	I1002 20:52:12.791742   84542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:52:12.809326   84542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:52:12.809517   84542 machine.go:93] provisionDockerMachine start ...
	I1002 20:52:12.809567   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:12.827593   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:12.827887   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:12.827902   84542 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:52:12.828625   84542 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54168->127.0.0.1:32793: read: connection reset by peer
	I1002 20:52:15.972698   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:52:15.972735   84542 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:52:15.972797   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:15.990741   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:15.990956   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:15.990973   84542 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:52:16.142437   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:52:16.142511   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:16.160361   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:16.160564   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:16.160579   84542 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:52:16.302266   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:52:16.302296   84542 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:52:16.302313   84542 ubuntu.go:190] setting up certificates
	I1002 20:52:16.302320   84542 provision.go:84] configureAuth start
	I1002 20:52:16.302377   84542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:52:16.319377   84542 provision.go:143] copyHostCerts
	I1002 20:52:16.319413   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:52:16.319443   84542 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:52:16.319461   84542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:52:16.319537   84542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:52:16.319672   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:52:16.319702   84542 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:52:16.319712   84542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:52:16.319756   84542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:52:16.319824   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:52:16.319866   84542 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:52:16.319876   84542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:52:16.319916   84542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:52:16.319986   84542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:52:16.769818   84542 provision.go:177] copyRemoteCerts
	I1002 20:52:16.769886   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:52:16.769928   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:16.787463   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:16.887704   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:52:16.887767   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:52:16.904272   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:52:16.904329   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 20:52:16.920641   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:52:16.920707   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:52:16.936967   84542 provision.go:87] duration metric: took 634.632967ms to configureAuth
	I1002 20:52:16.937003   84542 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:52:16.937196   84542 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:16.937308   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:16.955017   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:16.955246   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:16.955275   84542 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:52:17.205259   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:52:17.205285   84542 machine.go:96] duration metric: took 4.395755954s to provisionDockerMachine
	I1002 20:52:17.205299   84542 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:52:17.205312   84542 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:52:17.205377   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:52:17.205412   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.223368   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.323770   84542 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:52:17.327504   84542 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:52:17.327529   84542 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:52:17.327540   84542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:52:17.327579   84542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:52:17.327672   84542 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:52:17.327683   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:52:17.327765   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:52:17.335362   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:52:17.351623   84542 start.go:297] duration metric: took 146.311149ms for postStartSetup
	I1002 20:52:17.351719   84542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:52:17.351772   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.369784   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.467957   84542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:52:17.472383   84542 fix.go:57] duration metric: took 4.945089023s for fixHost
	I1002 20:52:17.472411   84542 start.go:84] releasing machines lock for "ha-872795", held for 4.94513852s
	I1002 20:52:17.472467   84542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:52:17.489531   84542 ssh_runner.go:195] Run: cat /version.json
	I1002 20:52:17.489572   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.489612   84542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:52:17.489672   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.507746   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.508356   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.604764   84542 ssh_runner.go:195] Run: systemctl --version
	I1002 20:52:17.660345   84542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:52:17.693619   84542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:52:17.698130   84542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:52:17.698182   84542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:52:17.705758   84542 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:52:17.705779   84542 start.go:496] detecting cgroup driver to use...
	I1002 20:52:17.705811   84542 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:52:17.705857   84542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:52:17.719313   84542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:52:17.730883   84542 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:52:17.730937   84542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:52:17.744989   84542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:52:17.757099   84542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:52:17.831778   84542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:52:17.908793   84542 docker.go:234] disabling docker service ...
	I1002 20:52:17.908841   84542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:52:17.922667   84542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:52:17.934489   84542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:52:18.017207   84542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:52:18.095150   84542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:52:18.107492   84542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:52:18.121597   84542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:52:18.121673   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.130616   84542 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:52:18.130710   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.139375   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.148104   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.156885   84542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:52:18.164947   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.173732   84542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.182183   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.191547   84542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:52:18.199437   84542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:52:18.206383   84542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:52:18.282056   84542 ssh_runner.go:195] Run: sudo systemctl restart crio
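Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf (reconstructed from the commands themselves, not captured from the host; key order in the real file may differ):
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
This is why the crio restart that follows picks up the "systemd" cgroup driver detected on the host earlier.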
	I1002 20:52:18.382052   84542 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:52:18.382107   84542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:52:18.385801   84542 start.go:564] Will wait 60s for crictl version
	I1002 20:52:18.385851   84542 ssh_runner.go:195] Run: which crictl
	I1002 20:52:18.389097   84542 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:52:18.412774   84542 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:52:18.412858   84542 ssh_runner.go:195] Run: crio --version
	I1002 20:52:18.439483   84542 ssh_runner.go:195] Run: crio --version
	I1002 20:52:18.467303   84542 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:52:18.468633   84542 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:52:18.485148   84542 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:52:18.489207   84542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:52:18.499465   84542 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:52:18.499579   84542 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:52:18.499630   84542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:52:18.530560   84542 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:52:18.530580   84542 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:52:18.530619   84542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:52:18.555058   84542 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:52:18.555079   84542 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:52:18.555086   84542 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:52:18.555178   84542 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:52:18.555236   84542 ssh_runner.go:195] Run: crio config
	I1002 20:52:18.597955   84542 cni.go:84] Creating CNI manager for ""
	I1002 20:52:18.597975   84542 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:52:18.597996   84542 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:52:18.598014   84542 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:52:18.598135   84542 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:52:18.598204   84542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:52:18.606091   84542 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:52:18.606154   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:52:18.613510   84542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:52:18.625264   84542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:52:18.636674   84542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
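At this point the generated kubeadm config is staged on the node as /var/tmp/minikube/kubeadm.yaml.new. As a manual sanity check (not something this test performs), recent kubeadm releases can validate such a file offline, assuming a kubeadm binary of the matching minor version is available on the node:
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new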
	I1002 20:52:18.648668   84542 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:52:18.652199   84542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:52:18.661567   84542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:52:18.736767   84542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:52:18.757803   84542 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:52:18.757823   84542 certs.go:195] generating shared ca certs ...
	I1002 20:52:18.757838   84542 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:18.757992   84542 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:52:18.758045   84542 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:52:18.758057   84542 certs.go:257] generating profile certs ...
	I1002 20:52:18.758171   84542 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:52:18.758242   84542 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4
	I1002 20:52:18.758293   84542 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:52:18.758306   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:52:18.758320   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:52:18.758339   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:52:18.758358   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:52:18.758374   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:52:18.758391   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:52:18.758406   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:52:18.758423   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:52:18.758486   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:52:18.758524   84542 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:52:18.758537   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:52:18.758570   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:52:18.758608   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:52:18.758638   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:52:18.758717   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:52:18.758756   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:52:18.758777   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:18.758793   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:52:18.759515   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:52:18.777064   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:52:18.794759   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:52:18.812947   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:52:18.834586   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 20:52:18.852127   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:52:18.867998   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:52:18.884379   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:52:18.900378   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:52:18.916888   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:52:18.933083   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:52:18.950026   84542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:52:18.961812   84542 ssh_runner.go:195] Run: openssl version
	I1002 20:52:18.967585   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:52:18.975573   84542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:18.979135   84542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:18.979186   84542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:19.012717   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:52:19.020807   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:52:19.029221   84542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:52:19.032921   84542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:52:19.032976   84542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:52:19.066315   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:52:19.074461   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:52:19.082874   84542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:52:19.086359   84542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:52:19.086398   84542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:52:19.120256   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
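The openssl x509 -hash / ln -fs pairs above follow OpenSSL's subject-hash convention: each CA certificate is linked under its hash (here b5213941.0, 51391683.0, 3ec20f2e.0) so the TLS stack can locate it in /etc/ssl/certs. A minimal sketch of one such round, using the minikubeCA paths from the log:
	# sketch: link a CA cert under its OpenSSL subject hash
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"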
	I1002 20:52:19.128343   84542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:52:19.131926   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:52:19.165248   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:52:19.198547   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:52:19.231870   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:52:19.270733   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:52:19.308097   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
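Each -checkend 86400 run above exits non-zero if the certificate would expire within the next 86400 seconds (24 hours); a failure here is what would trigger certificate regeneration. For example:
	# exits 0 if the cert is still valid 24h from now, non-zero otherwise
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400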
	I1002 20:52:19.350811   84542 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:19.350914   84542 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:52:19.350967   84542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:52:19.377617   84542 cri.go:89] found id: ""
	I1002 20:52:19.377716   84542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:52:19.385510   84542 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:52:19.385528   84542 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:52:19.385564   84542 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:52:19.392672   84542 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:52:19.393125   84542 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:52:19.393254   84542 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-9327/kubeconfig needs updating (will repair): [kubeconfig missing "ha-872795" cluster setting kubeconfig missing "ha-872795" context setting]
	I1002 20:52:19.393585   84542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:19.394226   84542 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:52:19.394732   84542 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:52:19.394755   84542 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:52:19.394766   84542 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:52:19.394772   84542 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:52:19.394777   84542 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:52:19.394827   84542 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:52:19.395209   84542 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:52:19.402694   84542 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:52:19.402727   84542 kubeadm.go:601] duration metric: took 17.194012ms to restartPrimaryControlPlane
	I1002 20:52:19.402739   84542 kubeadm.go:402] duration metric: took 51.94088ms to StartCluster
	I1002 20:52:19.402759   84542 settings.go:142] acquiring lock: {Name:mk7417292ad351472478c5266970b58ebdd4130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:19.402828   84542 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:52:19.403515   84542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:19.403777   84542 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:52:19.403833   84542 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:52:19.403924   84542 addons.go:69] Setting storage-provisioner=true in profile "ha-872795"
	I1002 20:52:19.403946   84542 addons.go:238] Setting addon storage-provisioner=true in "ha-872795"
	I1002 20:52:19.403971   84542 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:19.403980   84542 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:52:19.403941   84542 addons.go:69] Setting default-storageclass=true in profile "ha-872795"
	I1002 20:52:19.404021   84542 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-872795"
	I1002 20:52:19.404264   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:19.404354   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:19.408264   84542 out.go:179] * Verifying Kubernetes components...
	I1002 20:52:19.409793   84542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:52:19.423163   84542 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:52:19.423551   84542 addons.go:238] Setting addon default-storageclass=true in "ha-872795"
	I1002 20:52:19.423620   84542 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:52:19.424084   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:19.424808   84542 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:52:19.426120   84542 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:52:19.426142   84542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:52:19.426195   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:19.448766   84542 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:52:19.448788   84542 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:52:19.448846   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:19.451068   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:19.470398   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:19.516165   84542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:52:19.528726   84542 node_ready.go:35] waiting up to 6m0s for node "ha-872795" to be "Ready" ...
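The Ready wait below polls the node object over the API for up to 6m0s. An equivalent one-shot check from a workstation with the profile's kubeconfig (a hypothetical manual step, not part of the test) would be:
	kubectl wait --for=condition=Ready node/ha-872795 --timeout=6m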
	I1002 20:52:19.561681   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:52:19.574771   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:19.615332   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.615389   84542 retry.go:31] will retry after 249.743741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:19.627513   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.627547   84542 retry.go:31] will retry after 352.813922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
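The growing "will retry after ..." delays in the blocks that follow reflect a jittered backoff loop around kubectl apply. A minimal shell sketch of the pattern (illustrative only, not minikube's actual retry.go; the plain kubectl invocation and 2x growth factor are assumptions):
	delay=0.25
	until sudo kubectl apply -f /etc/kubernetes/addons/storageclass.yaml; do
	  echo "will retry after ${delay}s"
	  sleep "${delay}"
	  delay=$(awk -v d="${delay}" 'BEGIN { print d * 2 }')
	done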
	I1002 20:52:19.865823   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:19.919409   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.919443   84542 retry.go:31] will retry after 559.091624ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.980554   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:20.031881   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.031917   84542 retry.go:31] will retry after 209.83145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.242384   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:20.294555   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.294585   84542 retry.go:31] will retry after 773.589013ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.478908   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:20.529665   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.529699   84542 retry.go:31] will retry after 355.05837ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.885227   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:20.936319   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.936345   84542 retry.go:31] will retry after 627.720922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:21.069211   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:21.121770   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:21.121807   84542 retry.go:31] will retry after 1.242020524s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:21.529637   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:21.564790   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:21.617241   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:21.617280   84542 retry.go:31] will retry after 1.30407142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.364852   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:22.417314   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.417351   84542 retry.go:31] will retry after 1.575136446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.921528   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:22.971730   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.971760   84542 retry.go:31] will retry after 2.09594632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:23.530178   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:23.992771   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:24.045329   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:24.045366   84542 retry.go:31] will retry after 2.458367507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:25.068398   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:25.119280   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:25.119306   84542 retry.go:31] will retry after 2.791921669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:25.530272   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:26.504897   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:26.556428   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:26.556454   84542 retry.go:31] will retry after 1.449933818s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:27.912150   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:27.963040   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:27.963072   84542 retry.go:31] will retry after 3.952294259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:28.007231   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:28.030134   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:28.059164   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:28.059196   84542 retry.go:31] will retry after 5.898569741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:30.529371   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:31.915686   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:31.966677   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:31.966712   84542 retry.go:31] will retry after 9.505491694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:33.029347   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:33.958860   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:34.011198   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:34.011224   84542 retry.go:31] will retry after 3.955486716s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:35.029541   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:37.529312   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:37.967865   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:38.020105   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:38.020135   84542 retry.go:31] will retry after 14.344631794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:39.529637   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:41.472984   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:41.524664   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:41.524701   84542 retry.go:31] will retry after 14.131328473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:41.529983   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:43.530323   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:46.030267   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:48.529270   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:50.530344   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:52.365841   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:52.416707   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:52.416739   84542 retry.go:31] will retry after 8.612648854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:53.030261   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:55.530162   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:55.656412   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:55.708907   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:55.708941   84542 retry.go:31] will retry after 16.863018796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:57.530262   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:00.029774   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:01.029765   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:53:01.082336   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:01.082362   84542 retry.go:31] will retry after 16.45700088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:02.529635   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:04.530102   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:07.029312   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:09.029378   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:11.029761   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:12.572294   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:53:12.623265   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:12.623301   84542 retry.go:31] will retry after 31.20031459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:13.030189   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:15.529409   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:17.529701   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:17.539791   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:53:17.592998   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:17.593031   84542 retry.go:31] will retry after 46.85022317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:19.530271   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the same node_ready.go:55 "connection refused" warning repeats every 2-2.5s through 20:53:41; only the timestamp changes ...]
	W1002 20:53:43.529864   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:43.824308   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:53:43.879519   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:43.879556   84542 retry.go:31] will retry after 26.923177778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:46.029215   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the same node_ready.go:55 "connection refused" warning repeats every 2-2.5s through 20:54:01; only the timestamp changes ...]
	W1002 20:54:03.530255   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:54:04.443642   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:54:04.494168   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:54:04.494289   84542 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 20:54:06.029363   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:08.030124   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:10.529280   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:54:10.803751   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:54:10.855207   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:54:10.855322   84542 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:54:10.857534   84542 out.go:179] * Enabled addons: 
	I1002 20:54:10.858858   84542 addons.go:514] duration metric: took 1m51.455034236s for enable addons: enabled=[]
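Note on the failures above: with client-side validation enabled, kubectl first downloads the OpenAPI schema from the apiserver, and because nothing is listening on localhost:8443 every apply dies with "connection refused" before a manifest is even read (the error text itself points at the --validate=false escape hatch). minikube then retries each addon callback with a randomized, growing backoff (hence the irregular "will retry after 3.9s / 5.9s / 9.5s / 14.3s ..." intervals) until the budget runs out and both addons are reported failed with enabled=[]. A minimal Go sketch of that jittered-backoff retry pattern, assuming a made-up apply callback and made-up bounds (illustrative only, not minikube's actual retry.go):

// Sketch of the jittered-backoff retry pattern visible in the log above.
// The apply callback, bounds, and intervals are illustrative assumptions;
// this is not minikube's actual retry implementation.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps invoking apply until it succeeds or maxElapsed is
// spent, sleeping a randomized, growing interval between attempts (which is
// why the log shows waits like 3.9s, 5.9s, 9.5s, 14.3s rather than a fixed
// period).
func retryWithBackoff(apply func() error, maxElapsed time.Duration) error {
	deadline := time.Now().Add(maxElapsed)
	base := 2 * time.Second
	for {
		err := apply()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		// Jitter: pick a wait in [base/2, base*1.5) so concurrent callers
		// don't retry in lockstep, then grow the base for the next round.
		wait := base/2 + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		base *= 2
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("dial tcp [::1]:8443: connect: connection refused")
		}
		return nil
	}, 2*time.Minute)
	fmt.Println("result:", err)
}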
	W1002 20:54:12.530223   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the identical node_ready.go:55 "connection refused" warning repeats every 2-2.5s from 20:54:15 through 20:58:15 (over a hundred occurrences); only the timestamps change ...]
	W1002 20:58:17.529431   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:58:19.529451   84542 node_ready.go:38] duration metric: took 6m0.000520422s for node "ha-872795" to be "Ready" ...
	I1002 20:58:19.532185   84542 out.go:203] 
	W1002 20:58:19.533451   84542 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:58:19.533467   84542 out.go:285] * 
	W1002 20:58:19.535106   84542 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:58:19.536199   84542 out.go:203] 
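The terminal failure is the readiness wait itself: node_ready.go polled GET /api/v1/nodes/ha-872795 about every 2-2.5 seconds for the full 6m0s budget, never got a response, and WaitNodeCondition surfaced the context deadline as the GUEST_START error above. A minimal client-go sketch of that kind of Ready-condition poll, assuming a standard kubeconfig path (illustrative, not minikube's actual node_ready.go):

// Minimal sketch: poll a node's Ready condition with client-go until it is
// True or the context deadline expires. Paths, names, and the poll interval
// are illustrative assumptions taken from the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded", as in the log
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, cs, "ha-872795"))
}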
	
	
	==> CRI-O <==
	Oct 02 20:58:12 ha-872795 crio[522]: time="2025-10-02T20:58:12.866033528Z" level=info msg="createCtr: removing container bb8b90a100adb20f4c6c96b4f4ca0b28a316c38d7931ddf76f9d033ac1cfb30c" id=c7d16883-f57e-4785-9dd4-3b6ddca80ae3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:12 ha-872795 crio[522]: time="2025-10-02T20:58:12.866062481Z" level=info msg="createCtr: deleting container bb8b90a100adb20f4c6c96b4f4ca0b28a316c38d7931ddf76f9d033ac1cfb30c from storage" id=c7d16883-f57e-4785-9dd4-3b6ddca80ae3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:12 ha-872795 crio[522]: time="2025-10-02T20:58:12.86794839Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-872795_kube-system_bb93bd54c951d044e2ddbaf0dd48a41c_0" id=c7d16883-f57e-4785-9dd4-3b6ddca80ae3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.843856193Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=618a4ce7-4bb3-491a-a8fb-f40243eaa9aa name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.844712532Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c4fe3b99-bd97-49db-aee3-63b5145bee37 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.845555854Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-872795/kube-apiserver" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.845824529Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.850156484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.850708589Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.86454647Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.865889543Z" level=info msg="createCtr: deleting container ID 26b741597e0e06c13ef99306cb62623e7932b12637d8672cd7ecf500ecfa352b from idIndex" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.865924233Z" level=info msg="createCtr: removing container 26b741597e0e06c13ef99306cb62623e7932b12637d8672cd7ecf500ecfa352b" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.865962224Z" level=info msg="createCtr: deleting container 26b741597e0e06c13ef99306cb62623e7932b12637d8672cd7ecf500ecfa352b from storage" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.86783495Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-872795_kube-system_107e053e4ed538c93835a81754178211_0" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.843775999Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=6b82fe88-17ec-4c57-9b05-c19050877732 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.844719915Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=abcd93c5-df4d-45f8-92bb-46d6ca77b31e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.845470274Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-872795/kube-scheduler" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.84568952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.848829819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.849375846Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.865375235Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.866624528Z" level=info msg="createCtr: deleting container ID ea61318edf925528f59c496e81abeb8580eda7ad555ee2cd11682aea6e07dc2a from idIndex" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.866669375Z" level=info msg="createCtr: removing container ea61318edf925528f59c496e81abeb8580eda7ad555ee2cd11682aea6e07dc2a" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.86670509Z" level=info msg="createCtr: deleting container ea61318edf925528f59c496e81abeb8580eda7ad555ee2cd11682aea6e07dc2a from storage" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.868605064Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-872795_kube-system_9558d13dd1c45ecd0f3d491377941404_0" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:21.894613    2185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:58:21.895170    2185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:58:21.896756    2185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:58:21.897160    2185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:58:21.898604    2185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:58:21 up  1:40,  0 user,  load average: 0.00, 0.07, 0.13
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:58:12 ha-872795 kubelet[671]: E1002 20:58:12.868334     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-872795" podUID="bb93bd54c951d044e2ddbaf0dd48a41c"
	Oct 02 20:58:13 ha-872795 kubelet[671]: E1002 20:58:13.126088     671 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 02 20:58:14 ha-872795 kubelet[671]: E1002 20:58:14.480735     671 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:58:14 ha-872795 kubelet[671]: I1002 20:58:14.649734     671 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:58:14 ha-872795 kubelet[671]: E1002 20:58:14.650105     671 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:58:16 ha-872795 kubelet[671]: E1002 20:58:16.843416     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:58:16 ha-872795 kubelet[671]: E1002 20:58:16.868095     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:58:16 ha-872795 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:16 ha-872795 kubelet[671]:  > podSandboxID="882c47de7209ee0ba716a6023c26fffe30919d4843ca2e421dafbefd6c9534da"
	Oct 02 20:58:16 ha-872795 kubelet[671]: E1002 20:58:16.868180     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:58:16 ha-872795 kubelet[671]:         container kube-apiserver start failed in pod kube-apiserver-ha-872795_kube-system(107e053e4ed538c93835a81754178211): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:16 ha-872795 kubelet[671]:  > logger="UnhandledError"
	Oct 02 20:58:16 ha-872795 kubelet[671]: E1002 20:58:16.868209     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-872795" podUID="107e053e4ed538c93835a81754178211"
	Oct 02 20:58:17 ha-872795 kubelet[671]: E1002 20:58:17.843341     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:58:17 ha-872795 kubelet[671]: E1002 20:58:17.868908     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:58:17 ha-872795 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:17 ha-872795 kubelet[671]:  > podSandboxID="7a6998e86547c4fc510950e02f70bd6ee0f981ac1b56d6bfea37794d1ce0aad6"
	Oct 02 20:58:17 ha-872795 kubelet[671]: E1002 20:58:17.869005     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:58:17 ha-872795 kubelet[671]:         container kube-scheduler start failed in pod kube-scheduler-ha-872795_kube-system(9558d13dd1c45ecd0f3d491377941404): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:17 ha-872795 kubelet[671]:  > logger="UnhandledError"
	Oct 02 20:58:17 ha-872795 kubelet[671]: E1002 20:58:17.869035     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-872795" podUID="9558d13dd1c45ecd0f3d491377941404"
	Oct 02 20:58:18 ha-872795 kubelet[671]: E1002 20:58:18.858006     671 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-872795\" not found"
	Oct 02 20:58:21 ha-872795 kubelet[671]: E1002 20:58:21.481236     671 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:58:21 ha-872795 kubelet[671]: I1002 20:58:21.651338     671 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:58:21 ha-872795 kubelet[671]: E1002 20:58:21.651709     671 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	

-- /stdout --
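The whole dump above reduces to one repeating failure: every CreateContainer attempt (kube-controller-manager, kube-apiserver, kube-scheduler) dies in crio with "cannot open sd-bus: No such file or directory", the kubelet then reports CreateContainerError, and with no apiserver the node never registers, which is also why kubectl sees "connection refused" on localhost:8443. That sd-bus message usually means the runtime is trying to talk to systemd (typically for cgroup management) through a D-Bus socket that is not reachable inside the node container. A minimal triage sketch, assuming only the container name and the standard CRI-O config paths; these commands are hypothetical and were not part of the recorded run:

	# does the D-Bus system socket exist inside the node container?
	docker exec ha-872795 ls -l /run/dbus/system_bus_socket
	# which cgroup manager is CRI-O configured with?
	docker exec ha-872795 grep -r cgroup_manager /etc/crio/
	# did systemd inside the kicbase container come up at all?
	docker exec ha-872795 systemctl is-system-running

If the socket is missing while cgroup_manager is "systemd", that mismatch would explain every createCtr failure above; whether the right fix is repairing D-Bus in the image or switching CRI-O to cgroupfs is beyond what these logs can decide.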
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 2 (278.064836ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.48s)

TestMultiControlPlane/serial/AddSecondaryNode (1.43s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-872795 node add --control-plane --alsologtostderr -v 5: exit status 103 (236.152424ms)

-- stdout --
	* The control-plane node ha-872795 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-872795"

-- /stdout --
** stderr ** 
	I1002 20:58:22.298021   89192 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:58:22.298277   89192 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:58:22.298287   89192 out.go:374] Setting ErrFile to fd 2...
	I1002 20:58:22.298291   89192 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:58:22.298482   89192 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:58:22.298773   89192 mustload.go:65] Loading cluster: ha-872795
	I1002 20:58:22.299089   89192 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:58:22.299452   89192 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:58:22.316112   89192 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:58:22.316361   89192 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:58:22.365403   89192 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:58:22.356584688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:58:22.365503   89192 api_server.go:166] Checking apiserver status ...
	I1002 20:58:22.365542   89192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:22.365578   89192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:58:22.383175   89192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	W1002 20:58:22.486245   89192 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:58:22.488165   89192 out.go:179] * The control-plane node ha-872795 apiserver is not running: (state=Stopped)
	I1002 20:58:22.489498   89192 out.go:179]   To start a cluster, run: "minikube start -p ha-872795"

** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-872795 node add --control-plane --alsologtostderr -v 5" : exit status 103
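This exit code is consistent with the stderr above rather than a separate bug: before adding a node, minikube probes for a running apiserver over SSH (the "sudo pgrep -xnf kube-apiserver.*minikube.*" call logged by api_server.go), finds nothing, and bails out with the state=Stopped hint and status 103. A sketch of the same probe run by hand, reusing the container name from the logs (hypothetical, not part of the recorded run):

	# reproduce the preflight check minikube performed over SSH
	docker exec ha-872795 sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
	  || echo "no apiserver process: node add will refuse with state=Stopped"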
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 84738,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:52:12.570713623Z",
	            "FinishedAt": "2025-10-02T20:52:11.285843333Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dbe6cbb50399e5abdf61351254e97adbece9a5a3bd792d1e2b031f8c07b08d4b",
	            "SandboxKey": "/var/run/docker/netns/dbe6cbb50399",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:64:7f:41:cf:12",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "5bcad108b7e4fcc5d99139f5eebb0ef8974d98c9438fae15ef0758e9e96b01c7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
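The port mappings in this inspect output are what every later SSH step depends on, and the logs read them with a Go template via "docker container inspect -f". Run by hand against the same container, the lookup would be (hypothetical invocation; the template string is copied verbatim from the cli_runner lines in the logs):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-872795
	# prints 32793 here, matching the sshutil.go "new ssh client" entries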
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 2 (276.999981ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
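The two status probes disagree by design: --format={{.Host}} reports the docker container state (Running), while the earlier --format={{.APIServer}} probe reported Stopped, so the container survives the restart but the control plane inside it never comes up. Both fields live on the same status struct, so a single invocation can show the split directly (hypothetical command using the same --format flag seen above):

	out/minikube-linux-amd64 status -p ha-872795 --format 'host={{.Host}} apiserver={{.APIServer}}'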
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node add --alsologtostderr -v 5                                                    │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node stop m02 --alsologtostderr -v 5                                               │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node start m02 --alsologtostderr -v 5                                              │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node list --alsologtostderr -v 5                                                   │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ stop    │ ha-872795 stop --alsologtostderr -v 5                                                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │ 02 Oct 25 20:45 UTC │
	│ start   │ ha-872795 start --wait true --alsologtostderr -v 5                                           │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node list --alsologtostderr -v 5                                                   │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	│ node    │ ha-872795 node delete m03 --alsologtostderr -v 5                                             │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	│ stop    │ ha-872795 stop --alsologtostderr -v 5                                                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │ 02 Oct 25 20:52 UTC │
	│ start   │ ha-872795 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	│ node    │ ha-872795 node add --control-plane --alsologtostderr -v 5                                    │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:52:12
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:52:12.352153   84542 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:52:12.352281   84542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:12.352291   84542 out.go:374] Setting ErrFile to fd 2...
	I1002 20:52:12.352298   84542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:12.353016   84542 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:52:12.353847   84542 out.go:368] Setting JSON to false
	I1002 20:52:12.354816   84542 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5681,"bootTime":1759432651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:52:12.354901   84542 start.go:140] virtualization: kvm guest
	I1002 20:52:12.356608   84542 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:52:12.358039   84542 notify.go:221] Checking for updates...
	I1002 20:52:12.358067   84542 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:52:12.359475   84542 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:52:12.360841   84542 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:52:12.362132   84542 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:52:12.363282   84542 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:52:12.364343   84542 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:52:12.365896   84542 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:12.366331   84542 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:52:12.389014   84542 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:52:12.389115   84542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:12.440987   84542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:52:12.431594508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:52:12.441088   84542 docker.go:319] overlay module found
	I1002 20:52:12.443751   84542 out.go:179] * Using the docker driver based on existing profile
	I1002 20:52:12.444967   84542 start.go:306] selected driver: docker
	I1002 20:52:12.444981   84542 start.go:936] validating driver "docker" against &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:12.445063   84542 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:52:12.445136   84542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:12.499692   84542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:52:12.49002335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:52:12.500567   84542 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:52:12.500599   84542 cni.go:84] Creating CNI manager for ""
	I1002 20:52:12.500669   84542 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:52:12.500729   84542 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:12.503553   84542 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:52:12.504787   84542 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:52:12.505884   84542 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:52:12.506921   84542 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:52:12.506957   84542 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:52:12.506974   84542 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:52:12.506986   84542 cache.go:59] Caching tarball of preloaded images
	I1002 20:52:12.507092   84542 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:52:12.507108   84542 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:52:12.507207   84542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:52:12.527120   84542 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:52:12.527147   84542 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:52:12.527169   84542 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:52:12.527198   84542 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:52:12.527256   84542 start.go:365] duration metric: took 40.003µs to acquireMachinesLock for "ha-872795"
	I1002 20:52:12.527279   84542 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:52:12.527287   84542 fix.go:55] fixHost starting: 
	I1002 20:52:12.527480   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:12.544385   84542 fix.go:113] recreateIfNeeded on ha-872795: state=Stopped err=<nil>
	W1002 20:52:12.544415   84542 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:52:12.546060   84542 out.go:252] * Restarting existing docker container for "ha-872795" ...
	I1002 20:52:12.546129   84542 cli_runner.go:164] Run: docker start ha-872795
	I1002 20:52:12.772245   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:12.791338   84542 kic.go:430] container "ha-872795" state is running.
	I1002 20:52:12.791742   84542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:52:12.809326   84542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:52:12.809517   84542 machine.go:93] provisionDockerMachine start ...
	I1002 20:52:12.809567   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:12.827593   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:12.827887   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:12.827902   84542 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:52:12.828625   84542 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54168->127.0.0.1:32793: read: connection reset by peer
	I1002 20:52:15.972698   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:52:15.972735   84542 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:52:15.972797   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:15.990741   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:15.990956   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:15.990973   84542 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:52:16.142437   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:52:16.142511   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:16.160361   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:16.160564   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:16.160579   84542 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:52:16.302266   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:52:16.302296   84542 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:52:16.302313   84542 ubuntu.go:190] setting up certificates
	I1002 20:52:16.302320   84542 provision.go:84] configureAuth start
	I1002 20:52:16.302377   84542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:52:16.319377   84542 provision.go:143] copyHostCerts
	I1002 20:52:16.319413   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:52:16.319443   84542 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:52:16.319461   84542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:52:16.319537   84542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:52:16.319672   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:52:16.319702   84542 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:52:16.319712   84542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:52:16.319756   84542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:52:16.319824   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:52:16.319866   84542 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:52:16.319876   84542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:52:16.319916   84542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:52:16.319986   84542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
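Note: the server certificate generated above carries the SANs listed in the log entry (127.0.0.1, 192.168.49.2, ha-872795, localhost, minikube); as a sketch, they can be confirmed on the build host with:
	openssl x509 -noout -text -in /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'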
	I1002 20:52:16.769818   84542 provision.go:177] copyRemoteCerts
	I1002 20:52:16.769886   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:52:16.769928   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:16.787463   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:16.887704   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:52:16.887767   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:52:16.904272   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:52:16.904329   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 20:52:16.920641   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:52:16.920707   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:52:16.936967   84542 provision.go:87] duration metric: took 634.632967ms to configureAuth
	I1002 20:52:16.937003   84542 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:52:16.937196   84542 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:16.937308   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:16.955017   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:16.955246   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:16.955275   84542 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:52:17.205259   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:52:17.205285   84542 machine.go:96] duration metric: took 4.395755954s to provisionDockerMachine
	I1002 20:52:17.205299   84542 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:52:17.205312   84542 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:52:17.205377   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:52:17.205412   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.223368   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.323770   84542 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:52:17.327504   84542 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:52:17.327529   84542 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:52:17.327540   84542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:52:17.327579   84542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:52:17.327672   84542 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:52:17.327683   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:52:17.327765   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:52:17.335362   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:52:17.351623   84542 start.go:297] duration metric: took 146.311149ms for postStartSetup
	I1002 20:52:17.351719   84542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:52:17.351772   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.369784   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.467957   84542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:52:17.472383   84542 fix.go:57] duration metric: took 4.945089023s for fixHost
	I1002 20:52:17.472411   84542 start.go:84] releasing machines lock for "ha-872795", held for 4.94513852s
	I1002 20:52:17.472467   84542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:52:17.489531   84542 ssh_runner.go:195] Run: cat /version.json
	I1002 20:52:17.489572   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.489612   84542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:52:17.489672   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.507746   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.508356   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.604764   84542 ssh_runner.go:195] Run: systemctl --version
	I1002 20:52:17.660345   84542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:52:17.693619   84542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:52:17.698130   84542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:52:17.698182   84542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:52:17.705758   84542 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:52:17.705779   84542 start.go:496] detecting cgroup driver to use...
	I1002 20:52:17.705811   84542 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:52:17.705857   84542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:52:17.719313   84542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:52:17.730883   84542 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:52:17.730937   84542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:52:17.744989   84542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:52:17.757099   84542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:52:17.831778   84542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:52:17.908793   84542 docker.go:234] disabling docker service ...
	I1002 20:52:17.908841   84542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:52:17.922667   84542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:52:17.934489   84542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:52:18.017207   84542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:52:18.095150   84542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:52:18.107492   84542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:52:18.121597   84542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:52:18.121673   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.130616   84542 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:52:18.130710   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.139375   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.148104   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.156885   84542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:52:18.164947   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.173732   84542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.182183   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
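Note: the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl; a one-line sanity check (a sketch, not part of this run):
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf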
	I1002 20:52:18.191547   84542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:52:18.199437   84542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:52:18.206383   84542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:52:18.282056   84542 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:52:18.382052   84542 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:52:18.382107   84542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:52:18.385801   84542 start.go:564] Will wait 60s for crictl version
	I1002 20:52:18.385851   84542 ssh_runner.go:195] Run: which crictl
	I1002 20:52:18.389097   84542 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:52:18.412774   84542 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:52:18.412858   84542 ssh_runner.go:195] Run: crio --version
	I1002 20:52:18.439483   84542 ssh_runner.go:195] Run: crio --version
	I1002 20:52:18.467303   84542 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:52:18.468633   84542 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:52:18.485148   84542 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:52:18.489207   84542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:52:18.499465   84542 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:52:18.499579   84542 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:52:18.499630   84542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:52:18.530560   84542 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:52:18.530580   84542 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:52:18.530619   84542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:52:18.555058   84542 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:52:18.555079   84542 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:52:18.555086   84542 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:52:18.555178   84542 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
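Note: the unit fragment above lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below); the merged unit that systemd will actually run can be inspected on the node with:
	systemctl cat kubelet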
	I1002 20:52:18.555236   84542 ssh_runner.go:195] Run: crio config
	I1002 20:52:18.597955   84542 cni.go:84] Creating CNI manager for ""
	I1002 20:52:18.597975   84542 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:52:18.597996   84542 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:52:18.598014   84542 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:52:18.598135   84542 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:52:18.598204   84542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:52:18.606091   84542 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:52:18.606154   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:52:18.613510   84542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:52:18.625264   84542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:52:18.636674   84542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
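Note: the kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new; recent kubeadm releases can lint such a file offline before it is used, e.g. (a sketch using the staged binary):
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new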
	I1002 20:52:18.648668   84542 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:52:18.652199   84542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:52:18.661567   84542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:52:18.736767   84542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:52:18.757803   84542 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:52:18.757823   84542 certs.go:195] generating shared ca certs ...
	I1002 20:52:18.757838   84542 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:18.757992   84542 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:52:18.758045   84542 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:52:18.758057   84542 certs.go:257] generating profile certs ...
	I1002 20:52:18.758171   84542 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:52:18.758242   84542 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4
	I1002 20:52:18.758293   84542 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:52:18.758306   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:52:18.758320   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:52:18.758339   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:52:18.758358   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:52:18.758374   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:52:18.758391   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:52:18.758406   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:52:18.758423   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:52:18.758486   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:52:18.758524   84542 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:52:18.758537   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:52:18.758570   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:52:18.758608   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:52:18.758638   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:52:18.758717   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:52:18.758756   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:52:18.758777   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:18.758793   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:52:18.759515   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:52:18.777064   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:52:18.794759   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:52:18.812947   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:52:18.834586   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 20:52:18.852127   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:52:18.867998   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:52:18.884379   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:52:18.900378   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:52:18.916888   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:52:18.933083   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:52:18.950026   84542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:52:18.961812   84542 ssh_runner.go:195] Run: openssl version
	I1002 20:52:18.967585   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:52:18.975573   84542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:18.979135   84542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:18.979186   84542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:19.012717   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:52:19.020807   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:52:19.029221   84542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:52:19.032921   84542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:52:19.032976   84542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:52:19.066315   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:52:19.074461   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:52:19.082874   84542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:52:19.086359   84542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:52:19.086398   84542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:52:19.120256   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
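Note: the 8-hex-character link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes, the form the system trust store uses to index CAs; the same link can be derived by hand, e.g. for the minikube CA:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"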
	I1002 20:52:19.128343   84542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:52:19.131926   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:52:19.165248   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:52:19.198547   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:52:19.231870   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:52:19.270733   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:52:19.308097   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
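Note: each openssl -checkend 86400 call above exits non-zero if the certificate would expire within the next 24 hours; the same sweep can be written as a loop (a sketch):
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 || echo "${c}.crt expires within 24h"
	done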
	I1002 20:52:19.350811   84542 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:19.350914   84542 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:52:19.350967   84542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:52:19.377617   84542 cri.go:89] found id: ""
	I1002 20:52:19.377716   84542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:52:19.385510   84542 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:52:19.385528   84542 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:52:19.385564   84542 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:52:19.392672   84542 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:52:19.393125   84542 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:52:19.393254   84542 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-9327/kubeconfig needs updating (will repair): [kubeconfig missing "ha-872795" cluster setting kubeconfig missing "ha-872795" context setting]
	I1002 20:52:19.393585   84542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:19.394226   84542 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:52:19.394732   84542 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:52:19.394755   84542 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:52:19.394766   84542 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:52:19.394772   84542 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:52:19.394777   84542 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:52:19.394827   84542 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:52:19.395209   84542 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:52:19.402694   84542 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:52:19.402727   84542 kubeadm.go:601] duration metric: took 17.194012ms to restartPrimaryControlPlane
	I1002 20:52:19.402739   84542 kubeadm.go:402] duration metric: took 51.94088ms to StartCluster
	I1002 20:52:19.402759   84542 settings.go:142] acquiring lock: {Name:mk7417292ad351472478c5266970b58ebdd4130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:19.402828   84542 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:52:19.403515   84542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:19.403777   84542 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:52:19.403833   84542 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:52:19.403924   84542 addons.go:69] Setting storage-provisioner=true in profile "ha-872795"
	I1002 20:52:19.403946   84542 addons.go:238] Setting addon storage-provisioner=true in "ha-872795"
	I1002 20:52:19.403971   84542 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:19.403980   84542 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:52:19.403941   84542 addons.go:69] Setting default-storageclass=true in profile "ha-872795"
	I1002 20:52:19.404021   84542 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-872795"
	I1002 20:52:19.404264   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:19.404354   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:19.408264   84542 out.go:179] * Verifying Kubernetes components...
	I1002 20:52:19.409793   84542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:52:19.423163   84542 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:52:19.423551   84542 addons.go:238] Setting addon default-storageclass=true in "ha-872795"
	I1002 20:52:19.423620   84542 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:52:19.424084   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:19.424808   84542 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:52:19.426120   84542 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:52:19.426142   84542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:52:19.426195   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:19.448766   84542 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:52:19.448788   84542 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:52:19.448846   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:19.451068   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:19.470398   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:19.516165   84542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:52:19.528726   84542 node_ready.go:35] waiting up to 6m0s for node "ha-872795" to be "Ready" ...
	I1002 20:52:19.561681   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:52:19.574771   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:19.615332   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.615389   84542 retry.go:31] will retry after 249.743741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:19.627513   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.627547   84542 retry.go:31] will retry after 352.813922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.865823   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:19.919409   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.919443   84542 retry.go:31] will retry after 559.091624ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.980554   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:20.031881   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.031917   84542 retry.go:31] will retry after 209.83145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.242384   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:20.294555   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.294585   84542 retry.go:31] will retry after 773.589013ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.478908   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:20.529665   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.529699   84542 retry.go:31] will retry after 355.05837ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.885227   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:20.936319   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.936345   84542 retry.go:31] will retry after 627.720922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:21.069211   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:21.121770   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:21.121807   84542 retry.go:31] will retry after 1.242020524s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
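Note: every apply above fails with "connection refused" because the apiserver behind localhost:8443 is still coming up after the crio restart; minikube retries each manifest with increasing, jittered delays. Reduced to a shell idiom (a sketch of the behaviour, not minikube's actual code):
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml; do
	  sleep 2   # the real retries use randomized, growing delays rather than a fixed interval
	done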
	W1002 20:52:21.529637   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:21.564790   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:21.617241   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:21.617280   84542 retry.go:31] will retry after 1.30407142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.364852   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:22.417314   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.417351   84542 retry.go:31] will retry after 1.575136446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.921528   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:22.971730   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.971760   84542 retry.go:31] will retry after 2.09594632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:23.530178   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:23.992771   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:24.045329   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:24.045366   84542 retry.go:31] will retry after 2.458367507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:25.068398   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:25.119280   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:25.119306   84542 retry.go:31] will retry after 2.791921669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:25.530272   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:26.504897   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:26.556428   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:26.556454   84542 retry.go:31] will retry after 1.449933818s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:27.912150   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:27.963040   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:27.963072   84542 retry.go:31] will retry after 3.952294259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:28.007231   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:28.030134   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:28.059164   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:28.059196   84542 retry.go:31] will retry after 5.898569741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:30.529371   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:31.915686   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:31.966677   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:31.966712   84542 retry.go:31] will retry after 9.505491694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:33.029347   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:33.958860   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:34.011198   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:34.011224   84542 retry.go:31] will retry after 3.955486716s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:35.029541   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:37.529312   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:37.967865   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:38.020105   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:38.020135   84542 retry.go:31] will retry after 14.344631794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:39.529637   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:41.472984   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:41.524664   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:41.524701   84542 retry.go:31] will retry after 14.131328473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:41.529983   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:43.530323   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:46.030267   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:48.529270   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:50.530344   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:52.365841   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:52.416707   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:52.416739   84542 retry.go:31] will retry after 8.612648854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:53.030261   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:55.530162   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:55.656412   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:55.708907   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:55.708941   84542 retry.go:31] will retry after 16.863018796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:57.530262   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:00.029774   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:01.029765   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:53:01.082336   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:01.082362   84542 retry.go:31] will retry after 16.45700088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:02.529635   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:04.530102   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:07.029312   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:09.029378   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:11.029761   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:12.572294   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:53:12.623265   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:12.623301   84542 retry.go:31] will retry after 31.20031459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:13.030189   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:15.529409   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:17.529701   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:17.539791   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:53:17.592998   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:17.593031   84542 retry.go:31] will retry after 46.85022317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:19.530271   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:22.029341   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:24.029449   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:26.529475   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:28.529984   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:31.029344   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:33.029703   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:35.030147   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:37.529225   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:39.529316   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:41.529348   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:43.529864   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:43.824308   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:53:43.879519   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:43.879556   84542 retry.go:31] will retry after 26.923177778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:46.029215   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:48.030264   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:50.529262   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:53.029201   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:55.029266   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:57.529247   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:59.529324   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:01.529385   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:03.530255   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:54:04.443642   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:54:04.494168   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:54:04.494289   84542 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 20:54:06.029363   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:08.030124   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:10.529280   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:54:10.803751   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:54:10.855207   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:54:10.855322   84542 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:54:10.857534   84542 out.go:179] * Enabled addons: 
	I1002 20:54:10.858858   84542 addons.go:514] duration metric: took 1m51.455034236s for enable addons: enabled=[]
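The enable phase thus ends after 1m51s with no addons enabled: both storageclass.yaml and storage-provisioner.yaml exhaust their retry budget without the apiserver ever coming back. The backoff visible in the retry.go lines can be summarized with a small sketch (hedged: this mirrors the delays observed above, not minikube's actual retry implementation; applyManifest is a hypothetical stand-in for the kubectl call):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // applyManifest stands in for the kubectl apply call; here it always fails,
    // as it does throughout the log.
    func applyManifest(path string) error {
    	return fmt.Errorf("apply %s: connection refused", path)
    }

    func retryApply(path string, budget time.Duration) error {
    	start := time.Now()
    	delay := 600 * time.Millisecond // the first retry in the log waits ~628ms
    	for {
    		err := applyManifest(path)
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > budget {
    			return err // budget exhausted: surface the error to the caller
    		}
    		// Roughly geometric growth with jitter, matching the log's
    		// 0.6s, 1.2s, 1.6s, 2.1s, ... 31s, 47s progression.
    		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2
    	}
    }

    func main() {
    	if err := retryApply("/etc/kubernetes/addons/storageclass.yaml", 10*time.Second); err != nil {
    		fmt.Println("! Enabling 'default-storageclass' returned an error:", err)
    	}
    }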
	W1002 20:54:12.530223   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the same node_ready.go:55 "connection refused" warning repeats 97 more times, roughly every 2.5s, between the entry above and the one below ...]
	W1002 20:57:55.029242   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:57.529274   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:57:59.529506   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:01.529548   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:03.530105   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:06.029214   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:08.029269   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:10.529276   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:13.029280   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:15.029340   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:58:17.529431   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:58:19.529451   84542 node_ready.go:38] duration metric: took 6m0.000520422s for node "ha-872795" to be "Ready" ...
	I1002 20:58:19.532185   84542 out.go:203] 
	W1002 20:58:19.533451   84542 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:58:19.533467   84542 out.go:285] * 
	W1002 20:58:19.535106   84542 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:58:19.536199   84542 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.865924233Z" level=info msg="createCtr: removing container 26b741597e0e06c13ef99306cb62623e7932b12637d8672cd7ecf500ecfa352b" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.865962224Z" level=info msg="createCtr: deleting container 26b741597e0e06c13ef99306cb62623e7932b12637d8672cd7ecf500ecfa352b from storage" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.86783495Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-872795_kube-system_107e053e4ed538c93835a81754178211_0" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.843775999Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=6b82fe88-17ec-4c57-9b05-c19050877732 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.844719915Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=abcd93c5-df4d-45f8-92bb-46d6ca77b31e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.845470274Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-872795/kube-scheduler" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.84568952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.848829819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.849375846Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.865375235Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.866624528Z" level=info msg="createCtr: deleting container ID ea61318edf925528f59c496e81abeb8580eda7ad555ee2cd11682aea6e07dc2a from idIndex" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.866669375Z" level=info msg="createCtr: removing container ea61318edf925528f59c496e81abeb8580eda7ad555ee2cd11682aea6e07dc2a" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.86670509Z" level=info msg="createCtr: deleting container ea61318edf925528f59c496e81abeb8580eda7ad555ee2cd11682aea6e07dc2a from storage" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.868605064Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-872795_kube-system_9558d13dd1c45ecd0f3d491377941404_0" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.844194563Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=3e833565-1e5c-42e1-8f3e-a9639ca0d16e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.845180374Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=f875f5da-ae52-4e6e-845d-c04efefdf72d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.846020973Z" level=info msg="Creating container: kube-system/etcd-ha-872795/etcd" id=79b24449-197d-46d1-a26d-db19b5a1cfea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.846236658Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.850283519Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.85081737Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.866265457Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=79b24449-197d-46d1-a26d-db19b5a1cfea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.867607802Z" level=info msg="createCtr: deleting container ID a1c2e948ca5e140cb9e61dd5905442a58499a9a28ff3813bf75e61bfa0dc3bc4 from idIndex" id=79b24449-197d-46d1-a26d-db19b5a1cfea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.867644643Z" level=info msg="createCtr: removing container a1c2e948ca5e140cb9e61dd5905442a58499a9a28ff3813bf75e61bfa0dc3bc4" id=79b24449-197d-46d1-a26d-db19b5a1cfea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.86769622Z" level=info msg="createCtr: deleting container a1c2e948ca5e140cb9e61dd5905442a58499a9a28ff3813bf75e61bfa0dc3bc4 from storage" id=79b24449-197d-46d1-a26d-db19b5a1cfea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.869635215Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-872795_kube-system_d62260a65d00c99f27090ce6484101a9_0" id=79b24449-197d-46d1-a26d-db19b5a1cfea name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:23.320679    2356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:58:23.321215    2356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:58:23.322706    2356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:58:23.323102    2356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:58:23.324593    2356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:58:23 up  1:40,  0 user,  load average: 0.00, 0.07, 0.13
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:58:16 ha-872795 kubelet[671]: E1002 20:58:16.868180     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:58:16 ha-872795 kubelet[671]:         container kube-apiserver start failed in pod kube-apiserver-ha-872795_kube-system(107e053e4ed538c93835a81754178211): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:16 ha-872795 kubelet[671]:  > logger="UnhandledError"
	Oct 02 20:58:16 ha-872795 kubelet[671]: E1002 20:58:16.868209     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-872795" podUID="107e053e4ed538c93835a81754178211"
	Oct 02 20:58:17 ha-872795 kubelet[671]: E1002 20:58:17.843341     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:58:17 ha-872795 kubelet[671]: E1002 20:58:17.868908     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:58:17 ha-872795 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:17 ha-872795 kubelet[671]:  > podSandboxID="7a6998e86547c4fc510950e02f70bd6ee0f981ac1b56d6bfea37794d1ce0aad6"
	Oct 02 20:58:17 ha-872795 kubelet[671]: E1002 20:58:17.869005     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:58:17 ha-872795 kubelet[671]:         container kube-scheduler start failed in pod kube-scheduler-ha-872795_kube-system(9558d13dd1c45ecd0f3d491377941404): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:17 ha-872795 kubelet[671]:  > logger="UnhandledError"
	Oct 02 20:58:17 ha-872795 kubelet[671]: E1002 20:58:17.869035     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-872795" podUID="9558d13dd1c45ecd0f3d491377941404"
	Oct 02 20:58:18 ha-872795 kubelet[671]: E1002 20:58:18.858006     671 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-872795\" not found"
	Oct 02 20:58:21 ha-872795 kubelet[671]: E1002 20:58:21.481236     671 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:58:21 ha-872795 kubelet[671]: I1002 20:58:21.651338     671 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:58:21 ha-872795 kubelet[671]: E1002 20:58:21.651709     671 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:58:22 ha-872795 kubelet[671]: E1002 20:58:22.015959     671 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-872795.186ac7d8e662d313  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-872795,UID:ha-872795,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-872795 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-872795,},FirstTimestamp:2025-10-02 20:52:18.833699603 +0000 UTC m=+0.073883216,LastTimestamp:2025-10-02 20:52:18.833699603 +0000 UTC m=+0.073883216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-872795,}"
	Oct 02 20:58:22 ha-872795 kubelet[671]: E1002 20:58:22.843722     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:58:22 ha-872795 kubelet[671]: E1002 20:58:22.869953     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:58:22 ha-872795 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:22 ha-872795 kubelet[671]:  > podSandboxID="43319f87656d37bf5aa74dc1698fbdfe09fd9b593217b0c7aa626866d6c9e434"
	Oct 02 20:58:22 ha-872795 kubelet[671]: E1002 20:58:22.870068     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:58:22 ha-872795 kubelet[671]:         container etcd start failed in pod etcd-ha-872795_kube-system(d62260a65d00c99f27090ce6484101a9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:22 ha-872795 kubelet[671]:  > logger="UnhandledError"
	Oct 02 20:58:22 ha-872795 kubelet[671]: E1002 20:58:22.870110     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-872795" podUID="d62260a65d00c99f27090ce6484101a9"
	

-- /stdout --
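
The node_ready.go lines in the log above show a standard poll-until-deadline loop: one GET of the node's Ready condition every 2 to 2.5 seconds until the 6m0s budget expires with "context deadline exceeded". A minimal Go sketch of that pattern, assuming client-go and purely illustrative names (this is not minikube's actual implementation):

	package nodewait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls until the named node reports Ready or ctx expires,
	// mirroring the retry/deadline behaviour in the log above.
	func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
		for {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						return nil // node is Ready
					}
				}
			} else {
				// Matches the warnings above: log and retry on transient
				// errors such as "connect: connection refused".
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
			}
			select {
			case <-ctx.Done():
				// With context.WithTimeout(parent, 6*time.Minute) this is the
				// "context deadline exceeded" exit logged at 20:58:19 above.
				return fmt.Errorf("waiting for node %q to be ready: %w", name, ctx.Err())
			case <-time.After(2 * time.Second):
			}
		}
	}
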
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 2 (279.913697ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (1.43s)
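
The --format flag used in the status checks above is a Go text/template evaluated against minikube's status struct, which is why {{.APIServer}} prints the bare word "Stopped" (and {{.Host}}, used further below, prints "Running"). A toy reproduction of the mechanism; the Status type here is illustrative, not minikube's exact type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for the struct minikube renders with --format;
	// the real type has more fields.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
		// Equivalent of: minikube status --format={{.APIServer}}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
		// Output: Stopped
	}
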

x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.49s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-872795" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-872795\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-872795\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-872795\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-872795" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-872795\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-872795\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-872795\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
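
Both assertions above parse the same `profile list --output json` payload: ha_test.go:305 counts Config.Nodes inside the valid[0] entry (expecting 4 after the secondary add, finding 1), and ha_test.go:309 checks the Status rollup (expecting "HAppy", finding "Starting"). A standalone Go sketch of that check, decoding only the fields the test asserts on, with the JSON shape taken from the failure message:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList mirrors just the parts of `minikube profile list
	// --output json` that the HA tests assert on.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
			Config struct {
				Nodes []json.RawMessage `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// Against the output captured above this prints:
			//   ha-872795: status=Starting nodes=1
			// whereas the test requires status=HAppy and nodes=4.
			fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
		}
	}
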
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-872795
helpers_test.go:243: (dbg) docker inspect ha-872795:

-- stdout --
	[
	    {
	        "Id": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	        "Created": "2025-10-02T20:35:02.638484933Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 84738,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:52:12.570713623Z",
	            "FinishedAt": "2025-10-02T20:52:11.285843333Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/hosts",
	        "LogPath": "/var/lib/docker/containers/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55/b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55-json.log",
	        "Name": "/ha-872795",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-872795:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-872795",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b7ef56112a2f915a8466769e3dc48ccb5d179855d5609349544f8d527edabb55",
	                "LowerDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b8ec6cf65fe7022fd06c03f1c5ef6cbb5901b24ed40016173d6fd780abe4b96b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-872795",
	                "Source": "/var/lib/docker/volumes/ha-872795/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-872795",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-872795",
	                "name.minikube.sigs.k8s.io": "ha-872795",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dbe6cbb50399e5abdf61351254e97adbece9a5a3bd792d1e2b031f8c07b08d4b",
	            "SandboxKey": "/var/run/docker/netns/dbe6cbb50399",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-872795": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:64:7f:41:cf:12",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4acb601080f7524e5ab356fb9560b1bbe64fec573d1618de9389b9e2e3e9b610",
	                    "EndpointID": "5bcad108b7e4fcc5d99139f5eebb0ef8974d98c9438fae15ef0758e9e96b01c7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-872795",
	                        "b7ef56112a2f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
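
Everything the post-mortem extracts from the inspect dump above lives under .State and .NetworkSettings.Networks["ha-872795"]. A small sketch that decodes just those fields from `docker inspect` output (docker's JSON keys match the Go field names here, so no struct tags are needed); for the container above it would print state=running running=true ip=192.168.49.2, i.e. the host container is up even though the apiserver inside it is not:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry keeps only the fields the post-mortem cares about;
	// `docker inspect` returns a JSON array of such objects.
	type inspectEntry struct {
		State struct {
			Status  string
			Running bool
		}
		NetworkSettings struct {
			Networks map[string]struct {
				IPAddress string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "ha-872795").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		for _, e := range entries {
			net := e.NetworkSettings.Networks["ha-872795"]
			fmt.Printf("state=%s running=%v ip=%s\n", e.State.Status, e.State.Running, net.IPAddress)
		}
	}
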
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-872795 -n ha-872795: exit status 2 (273.878085ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-872795 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:43 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ kubectl │ ha-872795 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node add --alsologtostderr -v 5                                                    │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:44 UTC │                     │
	│ node    │ ha-872795 node stop m02 --alsologtostderr -v 5                                               │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node start m02 --alsologtostderr -v 5                                              │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node list --alsologtostderr -v 5                                                   │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ stop    │ ha-872795 stop --alsologtostderr -v 5                                                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │ 02 Oct 25 20:45 UTC │
	│ start   │ ha-872795 start --wait true --alsologtostderr -v 5                                           │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:45 UTC │                     │
	│ node    │ ha-872795 node list --alsologtostderr -v 5                                                   │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	│ node    │ ha-872795 node delete m03 --alsologtostderr -v 5                                             │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	│ stop    │ ha-872795 stop --alsologtostderr -v 5                                                        │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │ 02 Oct 25 20:52 UTC │
	│ start   │ ha-872795 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	│ node    │ ha-872795 node add --control-plane --alsologtostderr -v 5                                    │ ha-872795 │ jenkins │ v1.37.0 │ 02 Oct 25 20:58 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:52:12
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:52:12.352153   84542 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:52:12.352281   84542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:12.352291   84542 out.go:374] Setting ErrFile to fd 2...
	I1002 20:52:12.352298   84542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:52:12.353016   84542 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:52:12.353847   84542 out.go:368] Setting JSON to false
	I1002 20:52:12.354816   84542 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5681,"bootTime":1759432651,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:52:12.354901   84542 start.go:140] virtualization: kvm guest
	I1002 20:52:12.356608   84542 out.go:179] * [ha-872795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:52:12.358039   84542 notify.go:221] Checking for updates...
	I1002 20:52:12.358067   84542 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:52:12.359475   84542 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:52:12.360841   84542 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:52:12.362132   84542 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:52:12.363282   84542 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:52:12.364343   84542 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:52:12.365896   84542 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:12.366331   84542 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:52:12.389014   84542 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:52:12.389115   84542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:12.440987   84542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:52:12.431594508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:52:12.441088   84542 docker.go:319] overlay module found
	I1002 20:52:12.443751   84542 out.go:179] * Using the docker driver based on existing profile
	I1002 20:52:12.444967   84542 start.go:306] selected driver: docker
	I1002 20:52:12.444981   84542 start.go:936] validating driver "docker" against &{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:12.445063   84542 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:52:12.445136   84542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:52:12.499692   84542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:52:12.49002335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
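A note on the docker info line above: minikube gets that one-line dump by running docker system info with a JSON template and decoding it. A minimal, self-contained Go sketch of the same round trip (the struct here is illustrative and covers only a few of the fields visible in the log):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // dockerInfo models only the handful of fields this sketch reads; the real
    // payload carries everything shown in the log line above.
    type dockerInfo struct {
    	NCPU         int    `json:"NCPU"`
    	MemTotal     int64  `json:"MemTotal"`
    	CgroupDriver string `json:"CgroupDriver"`
    }

    func main() {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	fmt.Printf("cpus=%d mem=%d cgroup-driver=%s\n", info.NCPU, info.MemTotal, info.CgroupDriver)
    }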
	I1002 20:52:12.500567   84542 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:52:12.500599   84542 cni.go:84] Creating CNI manager for ""
	I1002 20:52:12.500669   84542 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:52:12.500729   84542 start.go:350] cluster config:
	{Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:12.503553   84542 out.go:179] * Starting "ha-872795" primary control-plane node in "ha-872795" cluster
	I1002 20:52:12.504787   84542 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 20:52:12.505884   84542 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:52:12.506921   84542 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:52:12.506957   84542 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:52:12.506974   84542 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:52:12.506986   84542 cache.go:59] Caching tarball of preloaded images
	I1002 20:52:12.507092   84542 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:52:12.507108   84542 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:52:12.507207   84542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:52:12.527120   84542 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:52:12.527147   84542 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:52:12.527169   84542 cache.go:233] Successfully downloaded all kic artifacts
	I1002 20:52:12.527198   84542 start.go:361] acquireMachinesLock for ha-872795: {Name:mk6cf6c7b5799e46782fb743c5dce91b5a08034a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:52:12.527256   84542 start.go:365] duration metric: took 40.003µs to acquireMachinesLock for "ha-872795"
	I1002 20:52:12.527279   84542 start.go:97] Skipping create...Using existing machine configuration
	I1002 20:52:12.527287   84542 fix.go:55] fixHost starting: 
	I1002 20:52:12.527480   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:12.544385   84542 fix.go:113] recreateIfNeeded on ha-872795: state=Stopped err=<nil>
	W1002 20:52:12.544415   84542 fix.go:139] unexpected machine state, will restart: <nil>
	I1002 20:52:12.546060   84542 out.go:252] * Restarting existing docker container for "ha-872795" ...
	I1002 20:52:12.546129   84542 cli_runner.go:164] Run: docker start ha-872795
	I1002 20:52:12.772245   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:12.791338   84542 kic.go:430] container "ha-872795" state is running.
	I1002 20:52:12.791742   84542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
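The inspect template above is how the container's address is read back after the restart. A standalone Go equivalent (containerIP is a hypothetical helper name, not minikube's):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIP mirrors the logged inspect template: iterate the container's
    // networks and collect each IPv4 address.
    func containerIP(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		"{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}", name).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	ip, err := containerIP("ha-872795")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ip)
    }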
	I1002 20:52:12.809326   84542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/config.json ...
	I1002 20:52:12.809517   84542 machine.go:93] provisionDockerMachine start ...
	I1002 20:52:12.809567   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:12.827593   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:12.827887   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:12.827902   84542 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:52:12.828625   84542 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54168->127.0.0.1:32793: read: connection reset by peer
	I1002 20:52:15.972698   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:52:15.972735   84542 ubuntu.go:182] provisioning hostname "ha-872795"
	I1002 20:52:15.972797   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:15.990741   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:15.990956   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:15.990973   84542 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-872795 && echo "ha-872795" | sudo tee /etc/hostname
	I1002 20:52:16.142437   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-872795
	
	I1002 20:52:16.142511   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:16.160361   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:16.160564   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:16.160579   84542 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-872795' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-872795/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-872795' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:52:16.302266   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
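The shell snippet above is deliberately idempotent: it touches /etc/hosts only when no entry for the hostname exists, preferring to rewrite the 127.0.1.1 line in place rather than append a duplicate. One way such a command could be templated in Go (a sketch, not minikube's source):

    package sketch

    import "fmt"

    // hostsFixupCmd renders the hostname-repair snippet from the log for an
    // arbitrary hostname; %[1]s reuses the single argument in every slot.
    func hostsFixupCmd(host string) string {
    	return fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    	else
    		echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    	fi
    fi`, host)
    }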
	I1002 20:52:16.302296   84542 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 20:52:16.302313   84542 ubuntu.go:190] setting up certificates
	I1002 20:52:16.302320   84542 provision.go:84] configureAuth start
	I1002 20:52:16.302377   84542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:52:16.319377   84542 provision.go:143] copyHostCerts
	I1002 20:52:16.319413   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:52:16.319443   84542 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 20:52:16.319461   84542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 20:52:16.319537   84542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 20:52:16.319672   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:52:16.319702   84542 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 20:52:16.319712   84542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 20:52:16.319756   84542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 20:52:16.319824   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:52:16.319866   84542 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 20:52:16.319876   84542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 20:52:16.319916   84542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 20:52:16.319986   84542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.ha-872795 san=[127.0.0.1 192.168.49.2 ha-872795 localhost minikube]
	I1002 20:52:16.769818   84542 provision.go:177] copyRemoteCerts
	I1002 20:52:16.769886   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:52:16.769928   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:16.787463   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:16.887704   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:52:16.887767   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:52:16.904272   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:52:16.904329   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1002 20:52:16.920641   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:52:16.920707   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:52:16.936967   84542 provision.go:87] duration metric: took 634.632967ms to configureAuth
	I1002 20:52:16.937003   84542 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:52:16.937196   84542 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:16.937308   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:16.955017   84542 main.go:141] libmachine: Using SSH client type: native
	I1002 20:52:16.955246   84542 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 20:52:16.955275   84542 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:52:17.205259   84542 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:52:17.205285   84542 machine.go:96] duration metric: took 4.395755954s to provisionDockerMachine
	I1002 20:52:17.205299   84542 start.go:294] postStartSetup for "ha-872795" (driver="docker")
	I1002 20:52:17.205312   84542 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:52:17.205377   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:52:17.205412   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.223368   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.323770   84542 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:52:17.327504   84542 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:52:17.327529   84542 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:52:17.327540   84542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 20:52:17.327579   84542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 20:52:17.327672   84542 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 20:52:17.327683   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /etc/ssl/certs/128512.pem
	I1002 20:52:17.327765   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 20:52:17.335362   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:52:17.351623   84542 start.go:297] duration metric: took 146.311149ms for postStartSetup
	I1002 20:52:17.351719   84542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:52:17.351772   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.369784   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.467957   84542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:52:17.472383   84542 fix.go:57] duration metric: took 4.945089023s for fixHost
	I1002 20:52:17.472411   84542 start.go:84] releasing machines lock for "ha-872795", held for 4.94513852s
	I1002 20:52:17.472467   84542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-872795
	I1002 20:52:17.489531   84542 ssh_runner.go:195] Run: cat /version.json
	I1002 20:52:17.489572   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.489612   84542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:52:17.489672   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:17.507746   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.508356   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:17.604764   84542 ssh_runner.go:195] Run: systemctl --version
	I1002 20:52:17.660345   84542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:52:17.693619   84542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:52:17.698130   84542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:52:17.698182   84542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:52:17.705758   84542 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:52:17.705779   84542 start.go:496] detecting cgroup driver to use...
	I1002 20:52:17.705811   84542 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:52:17.705857   84542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:52:17.719313   84542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:52:17.730883   84542 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:52:17.730937   84542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:52:17.744989   84542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:52:17.757099   84542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:52:17.831778   84542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:52:17.908793   84542 docker.go:234] disabling docker service ...
	I1002 20:52:17.908841   84542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:52:17.922667   84542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:52:17.934489   84542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:52:18.017207   84542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:52:18.095150   84542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:52:18.107492   84542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:52:18.121597   84542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:52:18.121673   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.130616   84542 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:52:18.130710   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.139375   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.148104   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.156885   84542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:52:18.164947   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.173732   84542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:52:18.182183   84542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
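Taken together, the sed invocations above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (reconstructed from the commands; the real file carries additional settings):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]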
	I1002 20:52:18.191547   84542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:52:18.199437   84542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:52:18.206383   84542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:52:18.282056   84542 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:52:18.382052   84542 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:52:18.382107   84542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:52:18.385801   84542 start.go:564] Will wait 60s for crictl version
	I1002 20:52:18.385851   84542 ssh_runner.go:195] Run: which crictl
	I1002 20:52:18.389097   84542 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:52:18.412774   84542 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:52:18.412858   84542 ssh_runner.go:195] Run: crio --version
	I1002 20:52:18.439483   84542 ssh_runner.go:195] Run: crio --version
	I1002 20:52:18.467303   84542 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:52:18.468633   84542 cli_runner.go:164] Run: docker network inspect ha-872795 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:52:18.485148   84542 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:52:18.489207   84542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
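The grep -v / echo / cp pipeline above is the standard trick for refreshing a single /etc/hosts mapping: strip any stale line for the name, append the current one, and swap the file in via a temp copy. A Go sketch of the same idea (ensureHostsEntry is hypothetical):

    package sketch

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // ensureHostsEntry drops any stale line ending in "\t<name>" from /etc/hosts,
    // appends the fresh mapping, and installs the result via a temp file and
    // sudo cp, mirroring the logged pipeline.
    func ensureHostsEntry(ip, name string) error {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return exec.Command("sudo", "cp", tmp, "/etc/hosts").Run()
    }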
	I1002 20:52:18.499465   84542 kubeadm.go:883] updating cluster {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:52:18.499579   84542 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:52:18.499630   84542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:52:18.530560   84542 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:52:18.530580   84542 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:52:18.530619   84542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:52:18.555058   84542 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:52:18.555079   84542 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:52:18.555086   84542 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:52:18.555178   84542 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-872795 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:52:18.555236   84542 ssh_runner.go:195] Run: crio config
	I1002 20:52:18.597955   84542 cni.go:84] Creating CNI manager for ""
	I1002 20:52:18.597975   84542 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 20:52:18.597996   84542 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:52:18.598014   84542 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-872795 NodeName:ha-872795 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:52:18.598135   84542 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-872795"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:52:18.598204   84542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:52:18.606091   84542 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:52:18.606154   84542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:52:18.613510   84542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 20:52:18.625264   84542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:52:18.636674   84542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 20:52:18.648668   84542 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:52:18.652199   84542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:52:18.661567   84542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:52:18.736767   84542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:52:18.757803   84542 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795 for IP: 192.168.49.2
	I1002 20:52:18.757823   84542 certs.go:195] generating shared ca certs ...
	I1002 20:52:18.757838   84542 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:18.757992   84542 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 20:52:18.758045   84542 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 20:52:18.758057   84542 certs.go:257] generating profile certs ...
	I1002 20:52:18.758171   84542 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key
	I1002 20:52:18.758242   84542 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key.da7297d4
	I1002 20:52:18.758293   84542 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key
	I1002 20:52:18.758306   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:52:18.758320   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:52:18.758339   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:52:18.758358   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:52:18.758374   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:52:18.758391   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:52:18.758406   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:52:18.758423   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:52:18.758486   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 20:52:18.758524   84542 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 20:52:18.758537   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:52:18.758570   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:52:18.758608   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:52:18.758638   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 20:52:18.758717   84542 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 20:52:18.758756   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> /usr/share/ca-certificates/128512.pem
	I1002 20:52:18.758777   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:18.758793   84542 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem -> /usr/share/ca-certificates/12851.pem
	I1002 20:52:18.759515   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:52:18.777064   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 20:52:18.794759   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:52:18.812947   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:52:18.834586   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 20:52:18.852127   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:52:18.867998   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:52:18.884379   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:52:18.900378   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 20:52:18.916888   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:52:18.933083   84542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 20:52:18.950026   84542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:52:18.961812   84542 ssh_runner.go:195] Run: openssl version
	I1002 20:52:18.967585   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:52:18.975573   84542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:18.979135   84542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:18.979186   84542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:52:19.012717   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:52:19.020807   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 20:52:19.029221   84542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 20:52:19.032921   84542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 20:52:19.032976   84542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 20:52:19.066315   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 20:52:19.074461   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 20:52:19.082874   84542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 20:52:19.086359   84542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 20:52:19.086398   84542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 20:52:19.120256   84542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
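The openssl x509 -hash / ln -fs pairs above follow OpenSSL's CA-directory convention: the verifier looks a certificate up under /etc/ssl/certs/<subject-hash>.0, so each installed PEM needs a hash-named symlink (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A Go sketch of that pairing:

    package sketch

    import (
    	"os/exec"
    	"strings"
    )

    // subjectHashLink computes the OpenSSL subject hash of a PEM certificate and
    // points the <hash>.0 symlink at it, the pairing visible in the log above.
    func subjectHashLink(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }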
	I1002 20:52:19.128343   84542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:52:19.131926   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:52:19.165248   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:52:19.198547   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:52:19.231870   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:52:19.270733   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:52:19.308097   84542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 20:52:19.350811   84542 kubeadm.go:400] StartCluster: {Name:ha-872795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-872795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:52:19.350914   84542 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:52:19.350967   84542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:52:19.377617   84542 cri.go:89] found id: ""
	I1002 20:52:19.377716   84542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:52:19.385510   84542 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:52:19.385528   84542 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:52:19.385564   84542 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:52:19.392672   84542 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:52:19.393125   84542 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-872795" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:52:19.393254   84542 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-9327/kubeconfig needs updating (will repair): [kubeconfig missing "ha-872795" cluster setting kubeconfig missing "ha-872795" context setting]
	I1002 20:52:19.393585   84542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:19.394226   84542 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
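The rest.Config dump above is client-go's client configuration; the fields that matter here are the host and the three TLS file paths. Building an equivalent clientset by hand would look roughly like this (a sketch reusing the paths from the log; newHAClient is not a minikube function):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // newHAClient builds a clientset from the same TLS material the logged
    // rest.Config references (paths copied from the log above).
    func newHAClient() (*kubernetes.Clientset, error) {
    	cfg := &rest.Config{
    		Host: "https://192.168.49.2:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt",
    		},
    	}
    	return kubernetes.NewForConfig(cfg)
    }

    func main() {
    	if _, err := newHAClient(); err != nil {
    		fmt.Println("client config failed:", err)
    	}
    }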
	I1002 20:52:19.394732   84542 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:52:19.394755   84542 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:52:19.394766   84542 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:52:19.394772   84542 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:52:19.394777   84542 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:52:19.394827   84542 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:52:19.395209   84542 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:52:19.402694   84542 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:52:19.402727   84542 kubeadm.go:601] duration metric: took 17.194012ms to restartPrimaryControlPlane
	I1002 20:52:19.402739   84542 kubeadm.go:402] duration metric: took 51.94088ms to StartCluster
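Worth noting how the restart path above decides it can skip reconfiguration: it diffs the kubeadm config already on disk against the freshly rendered kubeadm.yaml.new, and a zero exit status means the running control plane can be reused. A hedged Go sketch of that check:

    package sketch

    import (
    	"errors"
    	"os/exec"
    )

    // needsReconfig mirrors the logged diff: exit 0 means the on-disk kubeadm
    // config matches the newly rendered one, exit 1 means they differ.
    func needsReconfig() (bool, error) {
    	err := exec.Command("sudo", "diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Run()
    	if err == nil {
    		return false, nil // identical: reuse the running control plane
    	}
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
    		return true, nil // files differ: reconfigure
    	}
    	return false, err // diff itself failed
    }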
	I1002 20:52:19.402759   84542 settings.go:142] acquiring lock: {Name:mk7417292ad351472478c5266970b58ebdd4130e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:19.402828   84542 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:52:19.403515   84542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/kubeconfig: {Name:mk1223622f21778f4725717ab9b0c58140eca86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:52:19.403777   84542 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:52:19.403833   84542 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:52:19.403924   84542 addons.go:69] Setting storage-provisioner=true in profile "ha-872795"
	I1002 20:52:19.403946   84542 addons.go:238] Setting addon storage-provisioner=true in "ha-872795"
	I1002 20:52:19.403971   84542 config.go:182] Loaded profile config "ha-872795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:52:19.403980   84542 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:52:19.403941   84542 addons.go:69] Setting default-storageclass=true in profile "ha-872795"
	I1002 20:52:19.404021   84542 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-872795"
	I1002 20:52:19.404264   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:19.404354   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:19.408264   84542 out.go:179] * Verifying Kubernetes components...
	I1002 20:52:19.409793   84542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:52:19.423163   84542 kapi.go:59] client config for ha-872795: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/profiles/ha-872795/client.key", CAFile:"/home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:52:19.423551   84542 addons.go:238] Setting addon default-storageclass=true in "ha-872795"
	I1002 20:52:19.423620   84542 host.go:66] Checking if "ha-872795" exists ...
	I1002 20:52:19.424084   84542 cli_runner.go:164] Run: docker container inspect ha-872795 --format={{.State.Status}}
	I1002 20:52:19.424808   84542 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:52:19.426120   84542 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:52:19.426142   84542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:52:19.426195   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:19.448766   84542 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:52:19.448788   84542 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:52:19.448846   84542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-872795
	I1002 20:52:19.451068   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:19.470398   84542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/ha-872795/id_rsa Username:docker}
	I1002 20:52:19.516165   84542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:52:19.528726   84542 node_ready.go:35] waiting up to 6m0s for node "ha-872795" to be "Ready" ...
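The 6m0s "Ready" wait announced above is a poll against the node's status conditions. A client-go sketch of such a wait (waitNodeReady is illustrative, not minikube's helper):

    package sketch

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node until its Ready condition reports True,
    // giving up after six minutes like the wait announced in the log.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // apiserver may still be starting; keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }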
	I1002 20:52:19.561681   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:52:19.574771   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:19.615332   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.615389   84542 retry.go:31] will retry after 249.743741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:19.627513   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.627547   84542 retry.go:31] will retry after 352.813922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
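
Both applies fail here because the apiserver on localhost:8443 is refusing connections, and the retry.go lines reschedule each attempt with a growing, jittered delay (249ms, 352ms, and later several seconds, as the timestamps show). Below is an illustrative sketch of that backoff pattern using apimachinery's wait package; it is not minikube's retry helper, and runApply is a hypothetical stand-in for the ssh_runner kubectl invocation seen in the log.

    // Illustrative only: the growing "will retry after ..." delays above
    // follow a jittered exponential backoff like this one.
    package main

    import (
        "errors"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // runApply is hypothetical: the real code shells out to kubectl over SSH.
    func runApply(manifest string) error {
        return errors.New("connection refused")
    }

    func applyWithRetry(manifest string) error {
        backoff := wait.Backoff{
            Duration: 250 * time.Millisecond, // first delay, roughly matching the log
            Factor:   1.5,                    // grow the delay on each attempt
            Jitter:   0.5,                    // randomize, as the uneven delays suggest
            Steps:    12,                     // give up after this many attempts
        }
        return wait.ExponentialBackoff(backoff, func() (bool, error) {
            if err := runApply(manifest); err != nil {
                return false, nil // not done yet; sleep the next backoff step and retry
            }
            return true, nil
        })
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
            fmt.Println("gave up:", err)
        }
    }
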
	I1002 20:52:19.865823   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:19.919409   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.919443   84542 retry.go:31] will retry after 559.091624ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:19.980554   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:20.031881   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.031917   84542 retry.go:31] will retry after 209.83145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.242384   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:20.294555   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.294585   84542 retry.go:31] will retry after 773.589013ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.478908   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:20.529665   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.529699   84542 retry.go:31] will retry after 355.05837ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.885227   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:20.936319   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:20.936345   84542 retry.go:31] will retry after 627.720922ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:21.069211   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:21.121770   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:21.121807   84542 retry.go:31] will retry after 1.242020524s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:21.529637   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
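
The node_ready.go lines that follow are a readiness poll: for up to 6m0s the test repeatedly fetches the node object and checks its Ready condition, treating transport errors (the connection-refused seen here) as "not ready yet" rather than fatal. A client-go sketch of that pattern under those assumptions; this is not the actual minikube helper, and the 2-second interval is inferred from the log timestamps.

    // Sketch of the readiness poll implied by the node_ready.go lines:
    // poll every ~2s for up to 6m, retrying through transient API errors.
    package nodeready

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(cs *kubernetes.Clientset, name string) error {
        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    // e.g. connection refused while the apiserver is restarting
                    fmt.Printf("error getting node %q (will retry): %v\n", name, err)
                    return false, nil
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
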
	I1002 20:52:21.564790   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:21.617241   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:21.617280   84542 retry.go:31] will retry after 1.30407142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.364852   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:22.417314   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.417351   84542 retry.go:31] will retry after 1.575136446s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.921528   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:22.971730   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:22.971760   84542 retry.go:31] will retry after 2.09594632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:23.530178   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:23.992771   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:24.045329   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:24.045366   84542 retry.go:31] will retry after 2.458367507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:25.068398   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:25.119280   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:25.119306   84542 retry.go:31] will retry after 2.791921669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:25.530272   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:26.504897   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:26.556428   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:26.556454   84542 retry.go:31] will retry after 1.449933818s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:27.912150   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:27.963040   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:27.963072   84542 retry.go:31] will retry after 3.952294259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:28.007231   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:28.030134   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:28.059164   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:28.059196   84542 retry.go:31] will retry after 5.898569741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:30.529371   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:31.915686   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:31.966677   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:31.966712   84542 retry.go:31] will retry after 9.505491694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:33.029347   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:33.958860   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:34.011198   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:34.011224   84542 retry.go:31] will retry after 3.955486716s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:35.029541   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:37.529312   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:37.967865   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:38.020105   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:38.020135   84542 retry.go:31] will retry after 14.344631794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:39.529637   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:41.472984   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:41.524664   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:41.524701   84542 retry.go:31] will retry after 14.131328473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:41.529983   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:43.530323   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:46.030267   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:48.529270   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:50.530344   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:52.365841   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:52:52.416707   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:52.416739   84542 retry.go:31] will retry after 8.612648854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:53.030261   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:52:55.530162   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:52:55.656412   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:52:55.708907   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:52:55.708941   84542 retry.go:31] will retry after 16.863018796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:52:57.530262   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:00.029774   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:01.029765   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:53:01.082336   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:01.082362   84542 retry.go:31] will retry after 16.45700088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:02.529635   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:04.530102   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:07.029312   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:09.029378   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:11.029761   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:12.572294   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:53:12.623265   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:12.623301   84542 retry.go:31] will retry after 31.20031459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:13.030189   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:15.529409   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:17.529701   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:17.539791   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:53:17.592998   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:17.593031   84542 retry.go:31] will retry after 46.85022317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:19.530271   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:22.029341   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:24.029449   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:26.529475   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:28.529984   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:31.029344   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:33.029703   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:35.030147   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:37.529225   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:39.529316   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:41.529348   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:43.529864   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:53:43.824308   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:53:43.879519   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:53:43.879556   84542 retry.go:31] will retry after 26.923177778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:53:46.029215   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:48.030264   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:50.529262   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:53.029201   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:55.029266   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:57.529247   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:53:59.529324   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:01.529385   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:03.530255   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:54:04.443642   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 20:54:04.494168   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:54:04.494289   84542 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 20:54:06.029363   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:08.030124   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:10.529280   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:54:10.803751   84542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 20:54:10.855207   84542 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:54:10.855322   84542 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:54:10.857534   84542 out.go:179] * Enabled addons: 
	I1002 20:54:10.858858   84542 addons.go:514] duration metric: took 1m51.455034236s for enable addons: enabled=[]
	W1002 20:54:12.530223   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:15.030268   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:17.529366   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:19.529680   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:22.029332   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:24.529254   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:26.530225   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:28.530316   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:31.030201   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:33.530203   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:36.029295   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:38.030258   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:40.530189   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:43.030209   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:45.530056   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:47.530192   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 20:54:50.030236   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 91 further near-identical retries elided: the same "connection refused" warning for node "ha-872795", logged every 2-2.5 s from 20:54:52 through 20:58:15 ...]
	W1002 20:58:17.529431   84542 node_ready.go:55] error getting node "ha-872795" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-872795": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 20:58:19.529451   84542 node_ready.go:38] duration metric: took 6m0.000520422s for node "ha-872795" to be "Ready" ...
	I1002 20:58:19.532185   84542 out.go:203] 
	W1002 20:58:19.533451   84542 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:58:19.533467   84542 out.go:285] * 
	W1002 20:58:19.535106   84542 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:58:19.536199   84542 out.go:203] 
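The six-minute wait that just expired was polling the node's Ready condition through the apiserver on 192.168.49.2:8443, which never answered. A minimal manual sketch of the same check, plus the log collection the box above asks for (the kubectl binary and kubeconfig paths are taken from the "describe nodes" section below; the -p ha-872795 profile flag is inferred from this run, not printed in the advice):

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node ha-872795 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	minikube logs --file=logs.txt -p ha-872795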
	
	
	==> CRI-O <==
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.865924233Z" level=info msg="createCtr: removing container 26b741597e0e06c13ef99306cb62623e7932b12637d8672cd7ecf500ecfa352b" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.865962224Z" level=info msg="createCtr: deleting container 26b741597e0e06c13ef99306cb62623e7932b12637d8672cd7ecf500ecfa352b from storage" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:16 ha-872795 crio[522]: time="2025-10-02T20:58:16.86783495Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-872795_kube-system_107e053e4ed538c93835a81754178211_0" id=b99a9848-6d79-433c-a55e-d70785f3b430 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.843775999Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=6b82fe88-17ec-4c57-9b05-c19050877732 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.844719915Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=abcd93c5-df4d-45f8-92bb-46d6ca77b31e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.845470274Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-872795/kube-scheduler" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.84568952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.848829819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.849375846Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.865375235Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.866624528Z" level=info msg="createCtr: deleting container ID ea61318edf925528f59c496e81abeb8580eda7ad555ee2cd11682aea6e07dc2a from idIndex" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.866669375Z" level=info msg="createCtr: removing container ea61318edf925528f59c496e81abeb8580eda7ad555ee2cd11682aea6e07dc2a" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.86670509Z" level=info msg="createCtr: deleting container ea61318edf925528f59c496e81abeb8580eda7ad555ee2cd11682aea6e07dc2a from storage" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:17 ha-872795 crio[522]: time="2025-10-02T20:58:17.868605064Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-872795_kube-system_9558d13dd1c45ecd0f3d491377941404_0" id=84048655-6c49-4000-ab9f-b6a774a70393 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.844194563Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=3e833565-1e5c-42e1-8f3e-a9639ca0d16e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.845180374Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=f875f5da-ae52-4e6e-845d-c04efefdf72d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.846020973Z" level=info msg="Creating container: kube-system/etcd-ha-872795/etcd" id=79b24449-197d-46d1-a26d-db19b5a1cfea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.846236658Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.850283519Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.85081737Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.866265457Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=79b24449-197d-46d1-a26d-db19b5a1cfea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.867607802Z" level=info msg="createCtr: deleting container ID a1c2e948ca5e140cb9e61dd5905442a58499a9a28ff3813bf75e61bfa0dc3bc4 from idIndex" id=79b24449-197d-46d1-a26d-db19b5a1cfea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.867644643Z" level=info msg="createCtr: removing container a1c2e948ca5e140cb9e61dd5905442a58499a9a28ff3813bf75e61bfa0dc3bc4" id=79b24449-197d-46d1-a26d-db19b5a1cfea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.86769622Z" level=info msg="createCtr: deleting container a1c2e948ca5e140cb9e61dd5905442a58499a9a28ff3813bf75e61bfa0dc3bc4 from storage" id=79b24449-197d-46d1-a26d-db19b5a1cfea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:58:22 ha-872795 crio[522]: time="2025-10-02T20:58:22.869635215Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-872795_kube-system_d62260a65d00c99f27090ce6484101a9_0" id=79b24449-197d-46d1-a26d-db19b5a1cfea name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:24.808272    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:58:24.808846    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:58:24.810433    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:58:24.810894    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:58:24.812545    2529 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
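The kubectl failure above is a downstream symptom: on the node, kubectl targets localhost:8443, and nothing is listening because the kube-apiserver container was never created. A quick confirmation sketch, run inside the node (e.g. via minikube ssh -p ha-872795; ss and the apiserver /livez endpoint are standard tooling, not taken from this log):

	ss -ltn | grep 8443 || echo "nothing listening on 8443"
	curl -sk https://localhost:8443/livez || echo "apiserver refused"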
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:58:24 up  1:40,  0 user,  load average: 0.00, 0.07, 0.13
	Linux ha-872795 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:58:16 ha-872795 kubelet[671]: E1002 20:58:16.868180     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:58:16 ha-872795 kubelet[671]:         container kube-apiserver start failed in pod kube-apiserver-ha-872795_kube-system(107e053e4ed538c93835a81754178211): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:16 ha-872795 kubelet[671]:  > logger="UnhandledError"
	Oct 02 20:58:16 ha-872795 kubelet[671]: E1002 20:58:16.868209     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-872795" podUID="107e053e4ed538c93835a81754178211"
	Oct 02 20:58:17 ha-872795 kubelet[671]: E1002 20:58:17.843341     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:58:17 ha-872795 kubelet[671]: E1002 20:58:17.868908     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:58:17 ha-872795 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:17 ha-872795 kubelet[671]:  > podSandboxID="7a6998e86547c4fc510950e02f70bd6ee0f981ac1b56d6bfea37794d1ce0aad6"
	Oct 02 20:58:17 ha-872795 kubelet[671]: E1002 20:58:17.869005     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:58:17 ha-872795 kubelet[671]:         container kube-scheduler start failed in pod kube-scheduler-ha-872795_kube-system(9558d13dd1c45ecd0f3d491377941404): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:17 ha-872795 kubelet[671]:  > logger="UnhandledError"
	Oct 02 20:58:17 ha-872795 kubelet[671]: E1002 20:58:17.869035     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-872795" podUID="9558d13dd1c45ecd0f3d491377941404"
	Oct 02 20:58:18 ha-872795 kubelet[671]: E1002 20:58:18.858006     671 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-872795\" not found"
	Oct 02 20:58:21 ha-872795 kubelet[671]: E1002 20:58:21.481236     671 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-872795?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 20:58:21 ha-872795 kubelet[671]: I1002 20:58:21.651338     671 kubelet_node_status.go:75] "Attempting to register node" node="ha-872795"
	Oct 02 20:58:21 ha-872795 kubelet[671]: E1002 20:58:21.651709     671 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-872795"
	Oct 02 20:58:22 ha-872795 kubelet[671]: E1002 20:58:22.015959     671 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-872795.186ac7d8e662d313  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-872795,UID:ha-872795,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-872795 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-872795,},FirstTimestamp:2025-10-02 20:52:18.833699603 +0000 UTC m=+0.073883216,LastTimestamp:2025-10-02 20:52:18.833699603 +0000 UTC m=+0.073883216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-872795,}"
	Oct 02 20:58:22 ha-872795 kubelet[671]: E1002 20:58:22.843722     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-872795\" not found" node="ha-872795"
	Oct 02 20:58:22 ha-872795 kubelet[671]: E1002 20:58:22.869953     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:58:22 ha-872795 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:22 ha-872795 kubelet[671]:  > podSandboxID="43319f87656d37bf5aa74dc1698fbdfe09fd9b593217b0c7aa626866d6c9e434"
	Oct 02 20:58:22 ha-872795 kubelet[671]: E1002 20:58:22.870068     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:58:22 ha-872795 kubelet[671]:         container etcd start failed in pod etcd-ha-872795_kube-system(d62260a65d00c99f27090ce6484101a9): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:58:22 ha-872795 kubelet[671]:  > logger="UnhandledError"
	Oct 02 20:58:22 ha-872795 kubelet[671]: E1002 20:58:22.870110     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-872795" podUID="d62260a65d00c99f27090ce6484101a9"
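Note that the kubelet itself keeps running; what fails is everything that needs the apiserver (lease renewal, node registration, event posting) plus the container creates. A sketch that splits the problem in two (port 10248 is the kubelet's default healthz port; 8443 is the apiserver endpoint from the log):

	curl -s http://127.0.0.1:10248/healthz; echo    # kubelet-local health, expected "ok"
	curl -sk https://192.168.49.2:8443/livez        # apiserver, refused throughout this run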
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-872795 -n ha-872795: exit status 2 (275.988325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-872795" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.49s)

                                                
                                    
x
+
TestJSONOutput/start/Command (497.72s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-106808 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1002 21:00:54.131191   12851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:05:54.132867   12851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-106808 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: exit status 80 (8m17.71372921s)
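Each stdout line below is one CloudEvents-style JSON object, so the failure messages can be pulled out mechanically. A sketch assuming the command's stdout was saved to events.json (the filename is illustrative):

	jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message' events.json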

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0cb7cc19-7286-4a97-8308-58381918e74a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-106808] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d52e1474-92aa-4059-9d23-d43f49a61082","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"1b83eaa4-998f-4b77-a896-ecace08241d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"64c9f2aa-a9fb-431c-aa22-6e9071635d2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig"}}
	{"specversion":"1.0","id":"bad10368-5638-4184-8fc1-13da7c871dc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube"}}
	{"specversion":"1.0","id":"76e811f7-c73b-4a43-ab3b-266c014040fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9f2242aa-13e1-4a0a-b943-2d3abd9972a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"727bc84d-a347-477d-9a83-7ff0faba97b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fccbe584-88aa-4fd3-be62-f356e0c1d209","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a3177b2d-641b-4b0b-9bba-0f30a050567d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-106808\" primary control-plane node in \"json-output-106808\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"02819643-11ce-4bac-9c07-4c344730c9df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759382731-21643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3260e636-7d64-4ebc-b977-5cd0c7670045","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2d1f2814-8ee3-484a-afe3-7858c1412d51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"11","message":"Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...","name":"Preparing Kubernetes","totalsteps":"19"}}
	{"specversion":"1.0","id":"0d24d626-42ee-47e2-b6d2-5251811b894d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"2e185dda-1f80-4692-a650-3cbea535141e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"f02bebe7-292c-4f5a-87a1-005095a93b15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Pri
nting the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\
n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-106808 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-106808 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writi
ng \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the ku
belet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.737118ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.00002309s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000056701s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000100381s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using y
our preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused
, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"fa485dd8-03c1-4265-b0c0-b4702bd5f49e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"18c921ee-905d-4bb4-9637-40fc754a473c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7d01580-1658-4cf3-a8a0-e5d736dd6ef9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the outpu
t from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using
existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[
etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/health
z. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.000949561s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000068559s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.00009778s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000226908s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pa
use'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wa
it returned an error: context deadline exceeded]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"c8106b99-4374-4e33-bd9d-9b86d6806435","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system v
erification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/va
r/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy ku
belet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.000949561s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000068559s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.00009778s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000226908s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cr
io.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.4
9.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]\nTo see the stack trace of this error execute with --v=5 or higher","name":"GUEST_START","url":""}}
	{"specversion":"1.0","id":"2b6daa3d-64e5-4b73-894e-5213c488869f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 start -p json-output-106808 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio": exit status 80
--- FAIL: TestJSONOutput/start/Command (497.72s)
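The events quoted above are CloudEvents that `minikube start --output=json` prints one JSON object per line; the JSONOutput tests consume this stream. Below is a minimal Go sketch of decoding it, with field names taken from the events shown in this report; the `event` struct and the program itself are illustrative, not minikube's own code.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the fields visible in this report's output.
type event struct {
	ID     string `json:"id"`
	Source string `json:"source"`
	Type   string `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
	Data   struct {
		CurrentStep string `json:"currentstep"` // set on .step events only
		TotalSteps  string `json:"totalsteps"`
		Name        string `json:"name"`
		Message     string `json:"message"`
	} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	// Raise the token limit: .error events carry full kubeadm logs and can be very long.
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON lines in the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.step" {
			fmt.Printf("step %s/%s: %s\n", ev.Data.CurrentStep, ev.Data.TotalSteps, ev.Data.Message)
		}
	}
}

Piping the captured stdout of the failed run through a filter like this would print the step sequence seen below: 0, 1, 3, 5, 8, 11, 12, 13, then 12 and 13 again after the kubeadm retry.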

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 12 has already been assigned to another step:
Generating certificates and keys ...
Cannot use for:
Generating certificates and keys ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0cb7cc19-7286-4a97-8308-58381918e74a
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-106808] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: d52e1474-92aa-4059-9d23-d43f49a61082
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21683"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 1b83eaa4-998f-4b77-a896-ecace08241d7
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 64c9f2aa-a9fb-431c-aa22-6e9071635d2b
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: bad10368-5638-4184-8fc1-13da7c871dc7
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 76e811f7-c73b-4a43-ab3b-266c014040fe
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9f2242aa-13e1-4a0a-b943-2d3abd9972a9
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 727bc84d-a347-477d-9a83-7ff0faba97b7
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: fccbe584-88aa-4fd3-be62-f356e0c1d209
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a3177b2d-641b-4b0b-9bba-0f30a050567d
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-106808\" primary control-plane node in \"json-output-106808\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 02819643-11ce-4bac-9c07-4c344730c9df
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759382731-21643 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3260e636-7d64-4ebc-b977-5cd0c7670045
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 2d1f2814-8ee3-484a-afe3-7858c1412d51
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0d24d626-42ee-47e2-b6d2-5251811b894d
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 2e185dda-1f80-4692-a650-3cbea535141e
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: f02bebe7-292c-4f5a-87a1-005095a93b15
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-106808 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-106808 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.737118ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.00002309s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000056701s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000100381s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cri
o.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-schedul
er check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: fa485dd8-03c1-4265-b0c0-b4702bd5f49e
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 18c921ee-905d-4bb4-9637-40fc754a473c
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: a7d01580-1658-4cf3-a8a0-e5d736dd6ef9
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.000949561s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[
control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000068559s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.00009778s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000226908s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WAR
NING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: c8106b99-4374-4e33-bd9d-9b86d6806435
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Usi
ng existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static
Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.000949561s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[co
ntrol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000068559s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.00009778s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000226908s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNI
NG SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 2b6daa3d-64e5-4b73-894e-5213c488869f
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
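The check that failed here asserts that each currentstep value is assigned to at most one step across the whole event stream; because the kubeadm retry re-emits step 12 ("Generating certificates and keys ..."), the value is seen twice. A sketch of that invariant, reusing the event type from the decoding sketch above and the fmt package (illustrative only, not the code in json_output_test.go):

func checkDistinctCurrentSteps(events []event) error {
	assigned := map[string]string{} // currentstep -> step message
	for _, ev := range events {
		if ev.Type != "io.k8s.sigs.minikube.step" {
			continue
		}
		if prev, ok := assigned[ev.Data.CurrentStep]; ok {
			// Fires even when the repeated step carries the same message,
			// as in the retry above.
			return fmt.Errorf("step %s has already been assigned to another step: %s\nCannot use for: %s",
				ev.Data.CurrentStep, prev, ev.Data.Message)
		}
		assigned[ev.Data.CurrentStep] = ev.Data.Message
	}
	return nil
}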

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0cb7cc19-7286-4a97-8308-58381918e74a
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-106808] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: d52e1474-92aa-4059-9d23-d43f49a61082
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21683"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 1b83eaa4-998f-4b77-a896-ecace08241d7
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 64c9f2aa-a9fb-431c-aa22-6e9071635d2b
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: bad10368-5638-4184-8fc1-13da7c871dc7
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 76e811f7-c73b-4a43-ab3b-266c014040fe
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9f2242aa-13e1-4a0a-b943-2d3abd9972a9
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 727bc84d-a347-477d-9a83-7ff0faba97b7
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: fccbe584-88aa-4fd3-be62-f356e0c1d209
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: a3177b2d-641b-4b0b-9bba-0f30a050567d
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-106808\" primary control-plane node in \"json-output-106808\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 02819643-11ce-4bac-9c07-4c344730c9df
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759382731-21643 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3260e636-7d64-4ebc-b977-5cd0c7670045
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 2d1f2814-8ee3-484a-afe3-7858c1412d51
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0d24d626-42ee-47e2-b6d2-5251811b894d
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 2e185dda-1f80-4692-a650-3cbea535141e
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: f02bebe7-292c-4f5a-87a1-005095a93b15
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-106808 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-106808 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.737118ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-apiserver is not healthy after 4m0.00002309s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000056701s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000100381s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cri
o.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-schedul
er check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: fa485dd8-03c1-4265-b0c0-b4702bd5f49e
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 18c921ee-905d-4bb4-9637-40fc754a473c
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: a7d01580-1658-4cf3-a8a0-e5d736dd6ef9
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.000949561s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[
control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000068559s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.00009778s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000226908s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WAR
NING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: c8106b99-4374-4e33-bd9d-9b86d6806435
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Usi
ng existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static
Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.000949561s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[co
ntrol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000068559s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.00009778s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000226908s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNI
NG SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 2b6daa3d-64e5-4b73-894e-5213c488869f
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
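Note: the events above are minikube's CloudEvents-style JSON output ("minikube start --output=json" emits one JSON object per line). A minimal sketch of pulling only the error payloads out of such a stream, assuming jq is available (the filter is illustrative, not part of the test harness):

	# select only io.k8s.sigs.minikube.error events and print their message field
	out/minikube-linux-amd64 start -p json-output-106808 --output=json \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'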

                                                
                                    
TestMinikubeProfile (502.42s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-172874 --driver=docker  --container-runtime=crio
E1002 21:10:54.133384   12851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:15:54.132940   12851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p first-172874 --driver=docker  --container-runtime=crio: exit status 80 (8m19.128274533s)

                                                
                                                
-- stdout --
	* [first-172874] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "first-172874" primary control-plane node in "first-172874" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-172874 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-172874 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000959999s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000535835s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000522502s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000619897s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.758806ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001007937s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001292732s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001364241s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.758806ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001007937s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001292732s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001364241s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
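Note: the kubeadm advice embedded in the output above can be followed on the still-running node via "minikube ssh"; a sketch, with CONTAINERID standing in for whatever ID the first command reveals:

	# list CRI-O-managed Kubernetes containers inside the node
	out/minikube-linux-amd64 -p first-172874 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# inspect the logs of a failing container (CONTAINERID is a placeholder)
	out/minikube-linux-amd64 -p first-172874 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID"
	# address the Service-Kubelet preflight warning reported above
	out/minikube-linux-amd64 -p first-172874 ssh "sudo systemctl enable kubelet.service"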
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-linux-amd64 start -p first-172874 --driver=docker  --container-runtime=crio": exit status 80
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-02 21:17:26.591770877 +0000 UTC m=+5449.199052340
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect second-186100
helpers_test.go:239: (dbg) Non-zero exit: docker inspect second-186100: exit status 1 (28.169364ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such object: second-186100

                                                
                                                
** /stderr **
helpers_test.go:241: failed to get docker inspect: exit status 1
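Note: "docker inspect" exits non-zero when the object does not exist, as seen above; a tolerant existence check (a sketch using standard docker flags) avoids the hard failure:

	# gate the inspect on the container actually being present
	docker ps -a --filter name=second-186100 --format '{{.Names}}' | grep -qx second-186100 \
	  && docker inspect second-186100 \
	  || echo "second-186100: no such container"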
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p second-186100 -n second-186100
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p second-186100 -n second-186100: exit status 85 (52.762687ms)

                                                
                                                
-- stdout --
	* Profile "second-186100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-186100"

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 85 (may be ok)
helpers_test.go:249: "second-186100" host is not running, skipping log retrieval (state="* Profile \"second-186100\" not found. Run \"minikube profile list\" to view all profiles.")
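Note: the harness reads a single status field through a Go template; other fields of "minikube status" can be queried the same way (a sketch; Host, Kubelet, APIServer and Kubeconfig are the documented template fields):

	out/minikube-linux-amd64 status -p first-172874 --format='{{.Host}}'
	out/minikube-linux-amd64 status -p first-172874 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'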
helpers_test.go:175: Cleaning up "second-186100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-186100
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-02 21:17:26.807752579 +0000 UTC m=+5449.415034039
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect first-172874
helpers_test.go:243: (dbg) docker inspect first-172874:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "708f58043922f3fb83e6cd034bc49cb914be42dfa5f86a36f5abb7e54f3b7398",
	        "Created": "2025-10-02T21:09:12.687101286Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 117708,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:09:12.718369478Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/708f58043922f3fb83e6cd034bc49cb914be42dfa5f86a36f5abb7e54f3b7398/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/708f58043922f3fb83e6cd034bc49cb914be42dfa5f86a36f5abb7e54f3b7398/hostname",
	        "HostsPath": "/var/lib/docker/containers/708f58043922f3fb83e6cd034bc49cb914be42dfa5f86a36f5abb7e54f3b7398/hosts",
	        "LogPath": "/var/lib/docker/containers/708f58043922f3fb83e6cd034bc49cb914be42dfa5f86a36f5abb7e54f3b7398/708f58043922f3fb83e6cd034bc49cb914be42dfa5f86a36f5abb7e54f3b7398-json.log",
	        "Name": "/first-172874",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "first-172874:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "first-172874",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "708f58043922f3fb83e6cd034bc49cb914be42dfa5f86a36f5abb7e54f3b7398",
	                "LowerDir": "/var/lib/docker/overlay2/fed03d1594b389cb50a8f2ff30f84c5a53b3123d09cd0b434e0de7d9e25d074a-init/diff:/var/lib/docker/overlay2/cc99fcb0232c90d4e3344c8695f4278bb27e10a6241a6f8244bc5938f665cac3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fed03d1594b389cb50a8f2ff30f84c5a53b3123d09cd0b434e0de7d9e25d074a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fed03d1594b389cb50a8f2ff30f84c5a53b3123d09cd0b434e0de7d9e25d074a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fed03d1594b389cb50a8f2ff30f84c5a53b3123d09cd0b434e0de7d9e25d074a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "first-172874",
	                "Source": "/var/lib/docker/volumes/first-172874/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "first-172874",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "first-172874",
	                "name.minikube.sigs.k8s.io": "first-172874",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "85c0db03b61031fe4f945369d969050065f0e16402fdd679e0443075dbb3debb",
	            "SandboxKey": "/var/run/docker/netns/85c0db03b610",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "first-172874": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:6f:2a:08:1f:b3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "46c87ac2695b75389491ed77c70d06835d54ebd1f37a062782cb719ce9c01a8d",
	                    "EndpointID": "67c3a479462df90b578f6938b716cccff7e45c929d189dec133e40b5c78a3c31",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "first-172874",
	                        "708f58043922"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
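Note: instead of dumping the full JSON as above, individual fields can be extracted from "docker inspect" with a Go template; a sketch against the structure shown:

	# container state
	docker inspect -f '{{.State.Status}}' first-172874
	# IP on the profile's network
	docker inspect -f '{{(index .NetworkSettings.Networks "first-172874").IPAddress}}' first-172874
	# host port mapped to the apiserver port
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' first-172874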
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p first-172874 -n first-172874
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p first-172874 -n first-172874: exit status 6 (283.386043ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:17:27.095946  122231 status.go:458] kubeconfig endpoint: get endpoint: "first-172874" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
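Note: the "stale minikube-vm" warning above means the kubeconfig endpoint no longer matches the profile; minikube's own suggested repair (a sketch, assuming kubectl is on PATH):

	out/minikube-linux-amd64 update-context -p first-172874
	kubectl config current-context    # should now report first-172874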
helpers_test.go:252: <<< TestMinikubeProfile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMinikubeProfile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p first-172874 logs -n 25
helpers_test.go:260: TestMinikubeProfile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬──────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │         PROFILE          │   USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼──────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ ha-872795 node delete m03 --alsologtostderr -v 5                                                                        │ ha-872795                │ jenkins  │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	│ stop    │ ha-872795 stop --alsologtostderr -v 5                                                                                   │ ha-872795                │ jenkins  │ v1.37.0 │ 02 Oct 25 20:52 UTC │ 02 Oct 25 20:52 UTC │
	│ start   │ ha-872795 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                            │ ha-872795                │ jenkins  │ v1.37.0 │ 02 Oct 25 20:52 UTC │                     │
	│ node    │ ha-872795 node add --control-plane --alsologtostderr -v 5                                                               │ ha-872795                │ jenkins  │ v1.37.0 │ 02 Oct 25 20:58 UTC │                     │
	│ delete  │ -p ha-872795                                                                                                            │ ha-872795                │ jenkins  │ v1.37.0 │ 02 Oct 25 20:58 UTC │ 02 Oct 25 20:58 UTC │
	│ start   │ -p json-output-106808 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ json-output-106808       │ testUser │ v1.37.0 │ 02 Oct 25 20:58 UTC │                     │
	│ pause   │ -p json-output-106808 --output=json --user=testUser                                                                     │ json-output-106808       │ testUser │ v1.37.0 │ 02 Oct 25 21:06 UTC │ 02 Oct 25 21:06 UTC │
	│ unpause │ -p json-output-106808 --output=json --user=testUser                                                                     │ json-output-106808       │ testUser │ v1.37.0 │ 02 Oct 25 21:06 UTC │ 02 Oct 25 21:06 UTC │
	│ stop    │ -p json-output-106808 --output=json --user=testUser                                                                     │ json-output-106808       │ testUser │ v1.37.0 │ 02 Oct 25 21:06 UTC │ 02 Oct 25 21:06 UTC │
	│ delete  │ -p json-output-106808                                                                                                   │ json-output-106808       │ jenkins  │ v1.37.0 │ 02 Oct 25 21:06 UTC │ 02 Oct 25 21:06 UTC │
	│ start   │ -p json-output-error-768876 --memory=3072 --output=json --wait=true --driver=fail                                       │ json-output-error-768876 │ jenkins  │ v1.37.0 │ 02 Oct 25 21:06 UTC │                     │
	│ delete  │ -p json-output-error-768876                                                                                             │ json-output-error-768876 │ jenkins  │ v1.37.0 │ 02 Oct 25 21:06 UTC │ 02 Oct 25 21:06 UTC │
	│ start   │ -p docker-network-966691 --network=                                                                                     │ docker-network-966691    │ jenkins  │ v1.37.0 │ 02 Oct 25 21:06 UTC │ 02 Oct 25 21:07 UTC │
	│ delete  │ -p docker-network-966691                                                                                                │ docker-network-966691    │ jenkins  │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ start   │ -p docker-network-441409 --network=bridge                                                                               │ docker-network-441409    │ jenkins  │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ delete  │ -p docker-network-441409                                                                                                │ docker-network-441409    │ jenkins  │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ start   │ -p existing-network-237966 --network=existing-network                                                                   │ existing-network-237966  │ jenkins  │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:08 UTC │
	│ delete  │ -p existing-network-237966                                                                                              │ existing-network-237966  │ jenkins  │ v1.37.0 │ 02 Oct 25 21:08 UTC │ 02 Oct 25 21:08 UTC │
	│ start   │ -p custom-subnet-485760 --subnet=192.168.60.0/24                                                                        │ custom-subnet-485760     │ jenkins  │ v1.37.0 │ 02 Oct 25 21:08 UTC │ 02 Oct 25 21:08 UTC │
	│ delete  │ -p custom-subnet-485760                                                                                                 │ custom-subnet-485760     │ jenkins  │ v1.37.0 │ 02 Oct 25 21:08 UTC │ 02 Oct 25 21:08 UTC │
	│ start   │ -p static-ip-114906 --static-ip=192.168.200.200                                                                         │ static-ip-114906         │ jenkins  │ v1.37.0 │ 02 Oct 25 21:08 UTC │ 02 Oct 25 21:09 UTC │
	│ ip      │ static-ip-114906 ip                                                                                                     │ static-ip-114906         │ jenkins  │ v1.37.0 │ 02 Oct 25 21:09 UTC │ 02 Oct 25 21:09 UTC │
	│ delete  │ -p static-ip-114906                                                                                                     │ static-ip-114906         │ jenkins  │ v1.37.0 │ 02 Oct 25 21:09 UTC │ 02 Oct 25 21:09 UTC │
	│ start   │ -p first-172874 --driver=docker  --container-runtime=crio                                                               │ first-172874             │ jenkins  │ v1.37.0 │ 02 Oct 25 21:09 UTC │                     │
	│ delete  │ -p second-186100                                                                                                        │ second-186100            │ jenkins  │ v1.37.0 │ 02 Oct 25 21:17 UTC │ 02 Oct 25 21:17 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴──────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:09:07
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:09:07.501939  117144 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:09:07.502168  117144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:09:07.502171  117144 out.go:374] Setting ErrFile to fd 2...
	I1002 21:09:07.502175  117144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:09:07.502388  117144 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 21:09:07.502922  117144 out.go:368] Setting JSON to false
	I1002 21:09:07.504716  117144 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6696,"bootTime":1759432651,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:09:07.504773  117144 start.go:140] virtualization: kvm guest
	I1002 21:09:07.506720  117144 out.go:179] * [first-172874] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:09:07.507964  117144 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 21:09:07.507971  117144 notify.go:221] Checking for updates...
	I1002 21:09:07.510671  117144 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:09:07.511861  117144 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 21:09:07.513371  117144 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 21:09:07.514511  117144 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:09:07.515672  117144 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:09:07.517092  117144 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 21:09:07.539722  117144 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 21:09:07.539833  117144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:09:07.592277  117144 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:09:07.582792345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:09:07.592369  117144 docker.go:319] overlay module found
	I1002 21:09:07.594200  117144 out.go:179] * Using the docker driver based on user configuration
	I1002 21:09:07.595490  117144 start.go:306] selected driver: docker
	I1002 21:09:07.595510  117144 start.go:936] validating driver "docker" against <nil>
	I1002 21:09:07.595520  117144 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:09:07.595635  117144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:09:07.651834  117144 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:09:07.640746231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:09:07.652025  117144 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:09:07.652476  117144 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1002 21:09:07.652605  117144 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 21:09:07.654437  117144 out.go:179] * Using Docker driver with root privileges
	I1002 21:09:07.655734  117144 cni.go:84] Creating CNI manager for ""
	I1002 21:09:07.655786  117144 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:09:07.655791  117144 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:09:07.655842  117144 start.go:350] cluster config:
	{Name:first-172874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-172874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:09:07.657099  117144 out.go:179] * Starting "first-172874" primary control-plane node in "first-172874" cluster
	I1002 21:09:07.658387  117144 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 21:09:07.659608  117144 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:09:07.660851  117144 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:09:07.660886  117144 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:09:07.660895  117144 cache.go:59] Caching tarball of preloaded images
	I1002 21:09:07.660951  117144 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:09:07.660982  117144 preload.go:233] Found /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:09:07.660988  117144 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:09:07.661294  117144 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/config.json ...
	I1002 21:09:07.661308  117144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/config.json: {Name:mkb280fc9e9b2a0544eda185278842323c903b20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:09:07.680809  117144 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:09:07.680818  117144 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:09:07.680835  117144 cache.go:233] Successfully downloaded all kic artifacts
	I1002 21:09:07.680859  117144 start.go:361] acquireMachinesLock for first-172874: {Name:mkeccb1d7abc574c3898cf7e85411d85529c79b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:09:07.680946  117144 start.go:365] duration metric: took 73.827µs to acquireMachinesLock for "first-172874"
	I1002 21:09:07.680963  117144 start.go:94] Provisioning new machine with config: &{Name:first-172874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-172874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
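The provisioning config above is the same struct minikube serializes into the profile's config.json (written a few lines earlier). A minimal sketch for pulling out the fields that matter to this run, assuming jq is available on the host and that the JSON keys mirror the Go field names in the dump:

    # Illustrative only; path taken from this run's log.
    jq '{k8s: .KubernetesConfig.KubernetesVersion, runtime: .KubernetesConfig.ContainerRuntime, nodes: .Nodes}' \
      /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/config.json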
	I1002 21:09:07.681012  117144 start.go:126] createHost starting for "" (driver="docker")
	I1002 21:09:07.682781  117144 out.go:252] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I1002 21:09:07.682978  117144 start.go:160] libmachine.API.Create for "first-172874" (driver="docker")
	I1002 21:09:07.682998  117144 client.go:168] LocalClient.Create starting
	I1002 21:09:07.683060  117144 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem
	I1002 21:09:07.683095  117144 main.go:141] libmachine: Decoding PEM data...
	I1002 21:09:07.683108  117144 main.go:141] libmachine: Parsing certificate...
	I1002 21:09:07.683161  117144 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem
	I1002 21:09:07.683181  117144 main.go:141] libmachine: Decoding PEM data...
	I1002 21:09:07.683188  117144 main.go:141] libmachine: Parsing certificate...
	I1002 21:09:07.683481  117144 cli_runner.go:164] Run: docker network inspect first-172874 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:09:07.700169  117144 cli_runner.go:211] docker network inspect first-172874 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:09:07.700225  117144 network_create.go:284] running [docker network inspect first-172874] to gather additional debugging logs...
	I1002 21:09:07.700249  117144 cli_runner.go:164] Run: docker network inspect first-172874
	W1002 21:09:07.717830  117144 cli_runner.go:211] docker network inspect first-172874 returned with exit code 1
	I1002 21:09:07.717849  117144 network_create.go:287] error running [docker network inspect first-172874]: docker network inspect first-172874: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network first-172874 not found
	I1002 21:09:07.717866  117144 network_create.go:289] output of [docker network inspect first-172874]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network first-172874 not found
	
	** /stderr **
	I1002 21:09:07.717944  117144 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:09:07.734730  117144 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2565cccb4106 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c6:52:cb:ec:71:09} reservation:<nil>}
	I1002 21:09:07.735106  117144 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001da4240}
	I1002 21:09:07.735123  117144 network_create.go:124] attempt to create docker network first-172874 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1002 21:09:07.735158  117144 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=first-172874 first-172874
	I1002 21:09:07.790187  117144 network_create.go:108] docker network first-172874 192.168.58.0/24 created
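Since 192.168.49.0/24 was already taken, minikube picked the next free private subnet. A quick check that the bridge network came up with the expected subnet and gateway (network name and values taken from this run):

    docker network inspect first-172874 \
      --format '{{(index .IPAM.Config 0).Subnet}} gw={{(index .IPAM.Config 0).Gateway}}'
    # expected output: 192.168.58.0/24 gw=192.168.58.1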
	I1002 21:09:07.790212  117144 kic.go:121] calculated static IP "192.168.58.2" for the "first-172874" container
	I1002 21:09:07.790272  117144 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:09:07.806737  117144 cli_runner.go:164] Run: docker volume create first-172874 --label name.minikube.sigs.k8s.io=first-172874 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:09:07.823807  117144 oci.go:103] Successfully created a docker volume first-172874
	I1002 21:09:07.823891  117144 cli_runner.go:164] Run: docker run --rm --name first-172874-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-172874 --entrypoint /usr/bin/test -v first-172874:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:09:08.174482  117144 oci.go:107] Successfully prepared a docker volume first-172874
	I1002 21:09:08.174506  117144 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:09:08.174525  117144 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:09:08.174596  117144 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-172874:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:09:12.620638  117144 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-172874:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.445968708s)
	I1002 21:09:12.620689  117144 kic.go:203] duration metric: took 4.446159579s to extract preloaded images to volume ...
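The extraction step above uses a throwaway kicbase container purely as a tar runner: the lz4 preload is bind-mounted read-only and unpacked into the named volume that later becomes the node's /var. The same pattern, a sketch with the long image ref and tarball path from the log abbreviated into shell variables for readability:

    KICBASE='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d'
    PRELOAD=/home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
    # Unpack the preloaded images into the node's named volume.
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD:/preloaded.tar:ro" -v first-172874:/extractDir \
      "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir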
	W1002 21:09:12.620782  117144 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 21:09:12.620818  117144 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 21:09:12.620855  117144 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:09:12.671774  117144 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname first-172874 --name first-172874 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-172874 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=first-172874 --network first-172874 --ip 192.168.58.2 --volume first-172874:/var --security-opt apparmor=unconfined --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:09:12.917621  117144 cli_runner.go:164] Run: docker container inspect first-172874 --format={{.State.Running}}
	I1002 21:09:12.935937  117144 cli_runner.go:164] Run: docker container inspect first-172874 --format={{.State.Status}}
	I1002 21:09:12.954597  117144 cli_runner.go:164] Run: docker exec first-172874 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:09:12.997505  117144 oci.go:144] the created container "first-172874" has a running status.
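At this point the node container should be running with the static IP calculated earlier. One way to confirm both in a single inspect (container and network names from this run):

    docker container inspect first-172874 --format \
      '{{.State.Status}} {{(index .NetworkSettings.Networks "first-172874").IPAddress}}'
    # expected output: running 192.168.58.2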
	I1002 21:09:12.997526  117144 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/first-172874/id_rsa...
	I1002 21:09:13.679758  117144 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21683-9327/.minikube/machines/first-172874/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:09:13.703061  117144 cli_runner.go:164] Run: docker container inspect first-172874 --format={{.State.Status}}
	I1002 21:09:13.719486  117144 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:09:13.719498  117144 kic_runner.go:114] Args: [docker exec --privileged first-172874 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:09:13.764231  117144 cli_runner.go:164] Run: docker container inspect first-172874 --format={{.State.Status}}
	I1002 21:09:13.780610  117144 machine.go:93] provisionDockerMachine start ...
	I1002 21:09:13.780720  117144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-172874
	I1002 21:09:13.797349  117144 main.go:141] libmachine: Using SSH client type: native
	I1002 21:09:13.797583  117144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1002 21:09:13.797590  117144 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:09:13.939747  117144 main.go:141] libmachine: SSH cmd err, output: <nil>: first-172874
	
	I1002 21:09:13.939762  117144 ubuntu.go:182] provisioning hostname "first-172874"
	I1002 21:09:13.939815  117144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-172874
	I1002 21:09:13.956952  117144 main.go:141] libmachine: Using SSH client type: native
	I1002 21:09:13.957227  117144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1002 21:09:13.957246  117144 main.go:141] libmachine: About to run SSH command:
	sudo hostname first-172874 && echo "first-172874" | sudo tee /etc/hostname
	I1002 21:09:14.109167  117144 main.go:141] libmachine: SSH cmd err, output: <nil>: first-172874
	
	I1002 21:09:14.109243  117144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-172874
	I1002 21:09:14.126782  117144 main.go:141] libmachine: Using SSH client type: native
	I1002 21:09:14.126983  117144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1002 21:09:14.126993  117144 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfirst-172874' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 first-172874/g' /etc/hosts;
				else 
					echo '127.0.1.1 first-172874' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:09:14.269151  117144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
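The script above pins 127.0.1.1 to the node's hostname only when no matching /etc/hosts entry exists, and the sed branch runs silently, so empty SSH output is the expected success case. A quick way to verify the end state from the host:

    docker exec first-172874 sh -c 'hostname; grep first-172874 /etc/hosts'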
	I1002 21:09:14.269177  117144 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21683-9327/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-9327/.minikube}
	I1002 21:09:14.269208  117144 ubuntu.go:190] setting up certificates
	I1002 21:09:14.269218  117144 provision.go:84] configureAuth start
	I1002 21:09:14.269262  117144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-172874
	I1002 21:09:14.286251  117144 provision.go:143] copyHostCerts
	I1002 21:09:14.286303  117144 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem, removing ...
	I1002 21:09:14.286311  117144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem
	I1002 21:09:14.286407  117144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/ca.pem (1082 bytes)
	I1002 21:09:14.286525  117144 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem, removing ...
	I1002 21:09:14.286530  117144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem
	I1002 21:09:14.286568  117144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/cert.pem (1123 bytes)
	I1002 21:09:14.286776  117144 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem, removing ...
	I1002 21:09:14.286787  117144 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem
	I1002 21:09:14.286839  117144 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-9327/.minikube/key.pem (1679 bytes)
	I1002 21:09:14.286912  117144 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem org=jenkins.first-172874 san=[127.0.0.1 192.168.58.2 first-172874 localhost minikube]
	I1002 21:09:14.453166  117144 provision.go:177] copyRemoteCerts
	I1002 21:09:14.453212  117144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:09:14.453245  117144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-172874
	I1002 21:09:14.470518  117144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/first-172874/id_rsa Username:docker}
	I1002 21:09:14.571832  117144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:09:14.590917  117144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1002 21:09:14.607991  117144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:09:14.624949  117144 provision.go:87] duration metric: took 355.718641ms to configureAuth
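configureAuth generated a machine server certificate whose SANs were listed a few lines up. To double-check what actually went into the cert on disk (openssl assumed available on the host; path from the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21683-9327/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # should list: 127.0.0.1, 192.168.58.2, first-172874, localhost, minikube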
	I1002 21:09:14.624970  117144 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:09:14.625130  117144 config.go:182] Loaded profile config "first-172874": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:09:14.625222  117144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-172874
	I1002 21:09:14.642540  117144 main.go:141] libmachine: Using SSH client type: native
	I1002 21:09:14.642793  117144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1002 21:09:14.642803  117144 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:09:14.892020  117144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:09:14.892036  117144 machine.go:96] duration metric: took 1.111414378s to provisionDockerMachine
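The last provisioning step wrote a sysconfig drop-in marking the service CIDR as an insecure registry for CRI-O, then restarted the daemon. Verifying the drop-in landed (container name from this run):

    docker exec first-172874 cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '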
	I1002 21:09:14.892046  117144 client.go:171] duration metric: took 7.209042784s to LocalClient.Create
	I1002 21:09:14.892068  117144 start.go:168] duration metric: took 7.209089384s to libmachine.API.Create "first-172874"
	I1002 21:09:14.892075  117144 start.go:294] postStartSetup for "first-172874" (driver="docker")
	I1002 21:09:14.892086  117144 start.go:323] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:09:14.892145  117144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:09:14.892184  117144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-172874
	I1002 21:09:14.909757  117144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/first-172874/id_rsa Username:docker}
	I1002 21:09:15.013287  117144 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:09:15.016748  117144 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:09:15.016764  117144 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:09:15.016771  117144 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/addons for local assets ...
	I1002 21:09:15.016812  117144 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-9327/.minikube/files for local assets ...
	I1002 21:09:15.016876  117144 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem -> 128512.pem in /etc/ssl/certs
	I1002 21:09:15.016955  117144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:09:15.024414  117144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /etc/ssl/certs/128512.pem (1708 bytes)
	I1002 21:09:15.043144  117144 start.go:297] duration metric: took 151.055769ms for postStartSetup
	I1002 21:09:15.043532  117144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-172874
	I1002 21:09:15.061593  117144 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/config.json ...
	I1002 21:09:15.061885  117144 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:09:15.061922  117144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-172874
	I1002 21:09:15.079352  117144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/first-172874/id_rsa Username:docker}
	I1002 21:09:15.176548  117144 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:09:15.181014  117144 start.go:129] duration metric: took 7.499991549s to createHost
	I1002 21:09:15.181028  117144 start.go:84] releasing machines lock for "first-172874", held for 7.500075635s
	I1002 21:09:15.181081  117144 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-172874
	I1002 21:09:15.198703  117144 ssh_runner.go:195] Run: cat /version.json
	I1002 21:09:15.198738  117144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-172874
	I1002 21:09:15.198737  117144 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:09:15.198783  117144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-172874
	I1002 21:09:15.216516  117144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/first-172874/id_rsa Username:docker}
	I1002 21:09:15.216754  117144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/first-172874/id_rsa Username:docker}
	I1002 21:09:15.364091  117144 ssh_runner.go:195] Run: systemctl --version
	I1002 21:09:15.370326  117144 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:09:15.406181  117144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:09:15.410527  117144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:09:15.410571  117144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:09:15.435345  117144 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 21:09:15.435358  117144 start.go:496] detecting cgroup driver to use...
	I1002 21:09:15.435388  117144 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:09:15.435434  117144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:09:15.450286  117144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:09:15.461759  117144 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:09:15.461798  117144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:09:15.478285  117144 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:09:15.494742  117144 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:09:15.579983  117144 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:09:15.668786  117144 docker.go:234] disabling docker service ...
	I1002 21:09:15.668832  117144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:09:15.686401  117144 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:09:15.698191  117144 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:09:15.778268  117144 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:09:15.854633  117144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:09:15.866692  117144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:09:15.880145  117144 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:09:15.880203  117144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:09:15.889980  117144 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:09:15.890021  117144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:09:15.898213  117144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:09:15.906569  117144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:09:15.914642  117144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:09:15.922570  117144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:09:15.930943  117144 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:09:15.944830  117144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:09:15.953609  117144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:09:15.960585  117144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:09:15.967562  117144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:09:16.042864  117144 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:09:16.148295  117144 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:09:16.148342  117144 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:09:16.152142  117144 start.go:564] Will wait 60s for crictl version
	I1002 21:09:16.152193  117144 ssh_runner.go:195] Run: which crictl
	I1002 21:09:16.155498  117144 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:09:16.179723  117144 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:09:16.179782  117144 ssh_runner.go:195] Run: crio --version
	I1002 21:09:16.206616  117144 ssh_runner.go:195] Run: crio --version
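The sequence of sed edits above rewrites CRI-O's drop-in config in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. After the restart, the resulting keys can be spot-checked like this (file path from the log):

    docker exec first-172874 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",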
	I1002 21:09:16.234644  117144 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:09:16.236048  117144 cli_runner.go:164] Run: docker network inspect first-172874 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:09:16.253193  117144 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1002 21:09:16.257124  117144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:09:16.266921  117144 kubeadm.go:883] updating cluster {Name:first-172874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-172874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:09:16.267063  117144 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:09:16.267117  117144 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:09:16.297450  117144 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:09:16.297460  117144 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:09:16.297499  117144 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:09:16.321734  117144 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:09:16.321744  117144 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:09:16.321749  117144 kubeadm.go:934] updating node { 192.168.58.2 8443 v1.34.1 crio true true} ...
	I1002 21:09:16.321831  117144 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=first-172874 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:first-172874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:09:16.321893  117144 ssh_runner.go:195] Run: crio config
	I1002 21:09:16.365798  117144 cni.go:84] Creating CNI manager for ""
	I1002 21:09:16.365812  117144 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:09:16.365827  117144 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:09:16.365845  117144 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:first-172874 NodeName:first-172874 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:09:16.365968  117144 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "first-172874"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.58.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:09:16.366022  117144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:09:16.373907  117144 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:09:16.373953  117144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:09:16.381194  117144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1002 21:09:16.393235  117144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:09:16.407964  117144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
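With the kubelet unit, its drop-in, and the kubeadm.yaml staged (2208 bytes above), the generated config could in principle be validated ahead of the real init using kubeadm's dry-run mode. A sketch using the binary path from the log; inside this container the same --ignore-preflight-errors list as the eventual init command would likely be needed:

    docker exec first-172874 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run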
	I1002 21:09:16.419888  117144 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:09:16.423229  117144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:09:16.432532  117144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:09:16.509108  117144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:09:16.534136  117144 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874 for IP: 192.168.58.2
	I1002 21:09:16.534149  117144 certs.go:195] generating shared ca certs ...
	I1002 21:09:16.534168  117144 certs.go:227] acquiring lock for ca certs: {Name:mk51d7ca06e943b86909f2e3a4140d85edda0d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:09:16.534330  117144 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key
	I1002 21:09:16.534373  117144 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key
	I1002 21:09:16.534380  117144 certs.go:257] generating profile certs ...
	I1002 21:09:16.534450  117144 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/client.key
	I1002 21:09:16.534468  117144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/client.crt with IP's: []
	I1002 21:09:16.593618  117144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/client.crt ...
	I1002 21:09:16.593634  117144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/client.crt: {Name:mk47974e569797e47e7d405e68160abd83e32c68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:09:16.593870  117144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/client.key ...
	I1002 21:09:16.593881  117144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/client.key: {Name:mk2861333661789e17daff5f74fbd56ca33433d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:09:16.593995  117144 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/apiserver.key.06174947
	I1002 21:09:16.594008  117144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/apiserver.crt.06174947 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.58.2]
	I1002 21:09:17.038022  117144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/apiserver.crt.06174947 ...
	I1002 21:09:17.038039  117144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/apiserver.crt.06174947: {Name:mk6d9e658815590354b0cf34ff5b6cf5b8d8e64d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:09:17.038239  117144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/apiserver.key.06174947 ...
	I1002 21:09:17.038251  117144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/apiserver.key.06174947: {Name:mk554b00a0a0e3182e3f8901d283703e7af4a1b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:09:17.038401  117144 certs.go:382] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/apiserver.crt.06174947 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/apiserver.crt
	I1002 21:09:17.038491  117144 certs.go:386] copying /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/apiserver.key.06174947 -> /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/apiserver.key
	I1002 21:09:17.038545  117144 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/proxy-client.key
	I1002 21:09:17.038555  117144 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/proxy-client.crt with IP's: []
	I1002 21:09:17.600479  117144 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/proxy-client.crt ...
	I1002 21:09:17.600493  117144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/proxy-client.crt: {Name:mkfa29909aa2d5efe1f29540e27cf0447e3506e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:09:17.600679  117144 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/proxy-client.key ...
	I1002 21:09:17.600687  117144 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/proxy-client.key: {Name:mkee699d1f52ac3081680174599984da2be85f0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:09:17.600871  117144 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem (1338 bytes)
	W1002 21:09:17.600902  117144 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851_empty.pem, impossibly tiny 0 bytes
	I1002 21:09:17.600908  117144 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:09:17.600929  117144 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:09:17.600946  117144 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:09:17.600966  117144 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/certs/key.pem (1679 bytes)
	I1002 21:09:17.600997  117144 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem (1708 bytes)
	I1002 21:09:17.601523  117144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:09:17.619051  117144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:09:17.635686  117144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:09:17.652560  117144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:09:17.669268  117144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 21:09:17.686114  117144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:09:17.702071  117144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:09:17.718082  117144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/first-172874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:09:17.734088  117144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:09:17.752083  117144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/certs/12851.pem --> /usr/share/ca-certificates/12851.pem (1338 bytes)
	I1002 21:09:17.767951  117144 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/ssl/certs/128512.pem --> /usr/share/ca-certificates/128512.pem (1708 bytes)
	I1002 21:09:17.784084  117144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:09:17.795772  117144 ssh_runner.go:195] Run: openssl version
	I1002 21:09:17.801663  117144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:09:17.810291  117144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:09:17.813787  117144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:09:17.813832  117144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:09:17.847341  117144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:09:17.855759  117144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12851.pem && ln -fs /usr/share/ca-certificates/12851.pem /etc/ssl/certs/12851.pem"
	I1002 21:09:17.863476  117144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12851.pem
	I1002 21:09:17.866988  117144 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:03 /usr/share/ca-certificates/12851.pem
	I1002 21:09:17.867023  117144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12851.pem
	I1002 21:09:17.900339  117144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12851.pem /etc/ssl/certs/51391683.0"
	I1002 21:09:17.909244  117144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128512.pem && ln -fs /usr/share/ca-certificates/128512.pem /etc/ssl/certs/128512.pem"
	I1002 21:09:17.917855  117144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128512.pem
	I1002 21:09:17.921732  117144 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:03 /usr/share/ca-certificates/128512.pem
	I1002 21:09:17.921767  117144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128512.pem
	I1002 21:09:17.956941  117144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128512.pem /etc/ssl/certs/3ec20f2e.0"
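The test -L / ln -fs pairs above install each CA into the system trust store under its OpenSSL subject-hash name, which is where link names like b5213941.0 come from. Reproducing one hash inside the node (the same command minikube itself ran a few lines up):

    docker exec first-172874 openssl x509 -hash -noout \
      -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941  -> symlinked as /etc/ssl/certs/b5213941.0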
	I1002 21:09:17.967122  117144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:09:17.971092  117144 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:09:17.971131  117144 kubeadm.go:400] StartCluster: {Name:first-172874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-172874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:09:17.971194  117144 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:09:17.971231  117144 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:09:18.000431  117144 cri.go:89] found id: ""
	I1002 21:09:18.000483  117144 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:09:18.008600  117144 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:09:18.016204  117144 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:09:18.016250  117144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:09:18.023962  117144 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:09:18.023972  117144 kubeadm.go:157] found existing configuration files:
	
	I1002 21:09:18.024011  117144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:09:18.031377  117144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:09:18.031428  117144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:09:18.038563  117144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:09:18.046094  117144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:09:18.046145  117144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:09:18.053242  117144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:09:18.060641  117144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:09:18.060711  117144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:09:18.068446  117144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:09:18.076073  117144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:09:18.076122  117144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:09:18.083355  117144 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:09:18.140454  117144 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:09:18.194983  117144 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:13:23.309241  117144 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:13:23.309372  117144 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
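kubeadm gave up at the wait-control-plane phase after roughly four minutes: the apiserver livez check timed out, and the controller-manager (10257) and scheduler (10259) health ports refused connections. Standard first triage steps for this failure mode, not taken from this log (the apiserver container ID placeholder is hypothetical); kubeadm's recap of the run follows below:

    docker exec first-172874 sudo crictl ps -a                        # were the static pods even created?
    docker exec first-172874 sudo journalctl -u kubelet --no-pager | tail -n 50
    docker exec first-172874 sudo crictl logs <kube-apiserver-container-id>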
	I1002 21:13:23.311474  117144 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:13:23.311554  117144 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:13:23.311697  117144 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:13:23.311743  117144 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:13:23.311791  117144 kubeadm.go:318] OS: Linux
	I1002 21:13:23.311855  117144 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:13:23.311928  117144 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:13:23.311977  117144 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:13:23.312042  117144 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:13:23.312088  117144 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:13:23.312135  117144 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:13:23.312189  117144 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:13:23.312225  117144 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:13:23.312293  117144 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:13:23.312377  117144 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:13:23.312447  117144 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:13:23.312494  117144 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:13:23.314996  117144 out.go:252]   - Generating certificates and keys ...
	I1002 21:13:23.315074  117144 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:13:23.315143  117144 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:13:23.315201  117144 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:13:23.315263  117144 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:13:23.315340  117144 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:13:23.315407  117144 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:13:23.315478  117144 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:13:23.315603  117144 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [first-172874 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1002 21:13:23.315694  117144 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:13:23.315795  117144 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [first-172874 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1002 21:13:23.315879  117144 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:13:23.315970  117144 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:13:23.316031  117144 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:13:23.316080  117144 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:13:23.316123  117144 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:13:23.316197  117144 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:13:23.316272  117144 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:13:23.316339  117144 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:13:23.316381  117144 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:13:23.316443  117144 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:13:23.316496  117144 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:13:23.318858  117144 out.go:252]   - Booting up control plane ...
	I1002 21:13:23.318930  117144 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:13:23.319017  117144 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:13:23.319074  117144 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:13:23.319173  117144 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:13:23.319257  117144 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:13:23.319379  117144 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:13:23.319455  117144 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:13:23.319486  117144 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:13:23.319588  117144 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:13:23.319690  117144 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:13:23.319734  117144 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000959999s
	I1002 21:13:23.319804  117144 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:13:23.319875  117144 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1002 21:13:23.319962  117144 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:13:23.320025  117144 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:13:23.320079  117144 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000535835s
	I1002 21:13:23.320140  117144 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000522502s
	I1002 21:13:23.320200  117144 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000619897s
	I1002 21:13:23.320203  117144 kubeadm.go:318] 
	I1002 21:13:23.320270  117144 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:13:23.320331  117144 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:13:23.320396  117144 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:13:23.320476  117144 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:13:23.320556  117144 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:13:23.320620  117144 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:13:23.320664  117144 kubeadm.go:318] 
	W1002 21:13:23.320780  117144 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-172874 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-172874 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000959999s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000535835s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000522502s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000619897s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
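kubeadm's suggested triage can be scripted directly; a short sketch using the crio socket path from the log (container IDs vary per run, so CONTAINERID stays a placeholder to substitute):

    sock=unix:///var/run/crio/crio.sock
    # list every kube-* container, including ones that already exited
    sudo crictl --runtime-endpoint "$sock" ps -a | grep kube | grep -v pause
    # then dump the output of a failing container, substituting its ID
    sudo crictl --runtime-endpoint "$sock" logs CONTAINERID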
	
	I1002 21:13:23.320890  117144 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:13:23.764060  117144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:13:23.776447  117144 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:13:23.776504  117144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:13:23.784106  117144 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:13:23.784115  117144 kubeadm.go:157] found existing configuration files:
	
	I1002 21:13:23.784155  117144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:13:23.791561  117144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:13:23.791607  117144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:13:23.798609  117144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:13:23.805795  117144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:13:23.805841  117144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:13:23.813053  117144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:13:23.820176  117144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:13:23.820211  117144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:13:23.827079  117144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:13:23.834023  117144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:13:23.834064  117144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
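The stale-config sweep above applies the same grep-or-remove check to four files; condensed into a loop, with the paths and marker string copied from the log:

    marker='https://control-plane.minikube.internal:8443'
    for f in admin kubelet controller-manager scheduler; do
        conf=/etc/kubernetes/$f.conf
        # keep the file only if it already points at the expected endpoint
        sudo grep -q "$marker" "$conf" || sudo rm -f "$conf"
    done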
	I1002 21:13:23.840710  117144 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:13:23.875689  117144 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:13:23.875737  117144 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:13:23.894061  117144 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:13:23.894147  117144 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:13:23.894182  117144 kubeadm.go:318] OS: Linux
	I1002 21:13:23.894253  117144 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:13:23.894290  117144 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:13:23.894326  117144 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:13:23.894362  117144 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:13:23.894398  117144 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:13:23.894433  117144 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:13:23.894469  117144 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:13:23.894548  117144 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:13:23.951341  117144 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:13:23.951473  117144 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:13:23.951591  117144 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:13:23.957300  117144 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:13:23.960820  117144 out.go:252]   - Generating certificates and keys ...
	I1002 21:13:23.960885  117144 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:13:23.960989  117144 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:13:23.961061  117144 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:13:23.961112  117144 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:13:23.961175  117144 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:13:23.961221  117144 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:13:23.961272  117144 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:13:23.961328  117144 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:13:23.961396  117144 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:13:23.961454  117144 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:13:23.961482  117144 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:13:23.961525  117144 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:13:24.132908  117144 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:13:24.800245  117144 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:13:25.150984  117144 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:13:25.417774  117144 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:13:25.534981  117144 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:13:25.535422  117144 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:13:25.537558  117144 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:13:25.540567  117144 out.go:252]   - Booting up control plane ...
	I1002 21:13:25.540657  117144 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:13:25.540747  117144 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:13:25.540842  117144 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:13:25.553223  117144 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:13:25.553302  117144 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:13:25.559656  117144 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:13:25.559843  117144 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:13:25.559912  117144 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:13:25.662389  117144 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:13:25.662561  117144 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:13:26.164010  117144 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.758806ms
	I1002 21:13:26.167910  117144 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:13:26.168028  117144 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1002 21:13:26.168156  117144 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:13:26.168256  117144 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:17:26.169106  117144 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001007937s
	I1002 21:17:26.169246  117144 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001292732s
	I1002 21:17:26.169352  117144 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001364241s
	I1002 21:17:26.169359  117144 kubeadm.go:318] 
	I1002 21:17:26.169431  117144 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:17:26.169515  117144 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:17:26.169684  117144 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:17:26.169805  117144 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:17:26.169875  117144 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:17:26.169989  117144 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:17:26.169994  117144 kubeadm.go:318] 
	I1002 21:17:26.173016  117144 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:17:26.173107  117144 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:17:26.173694  117144 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:17:26.173774  117144 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:17:26.173867  117144 kubeadm.go:402] duration metric: took 8m8.20273107s to StartCluster
	I1002 21:17:26.173923  117144 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:17:26.173985  117144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:17:26.199850  117144 cri.go:89] found id: ""
	I1002 21:17:26.199875  117144 logs.go:282] 0 containers: []
	W1002 21:17:26.199884  117144 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:17:26.199891  117144 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:17:26.199958  117144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:17:26.225725  117144 cri.go:89] found id: ""
	I1002 21:17:26.225738  117144 logs.go:282] 0 containers: []
	W1002 21:17:26.225744  117144 logs.go:284] No container was found matching "etcd"
	I1002 21:17:26.225749  117144 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:17:26.225793  117144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:17:26.249688  117144 cri.go:89] found id: ""
	I1002 21:17:26.249702  117144 logs.go:282] 0 containers: []
	W1002 21:17:26.249708  117144 logs.go:284] No container was found matching "coredns"
	I1002 21:17:26.249713  117144 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:17:26.249754  117144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:17:26.274580  117144 cri.go:89] found id: ""
	I1002 21:17:26.274610  117144 logs.go:282] 0 containers: []
	W1002 21:17:26.274617  117144 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:17:26.274621  117144 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:17:26.274695  117144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:17:26.299100  117144 cri.go:89] found id: ""
	I1002 21:17:26.299115  117144 logs.go:282] 0 containers: []
	W1002 21:17:26.299123  117144 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:17:26.299129  117144 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:17:26.299188  117144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:17:26.322492  117144 cri.go:89] found id: ""
	I1002 21:17:26.322511  117144 logs.go:282] 0 containers: []
	W1002 21:17:26.322520  117144 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:17:26.322527  117144 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:17:26.322580  117144 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:17:26.347996  117144 cri.go:89] found id: ""
	I1002 21:17:26.348013  117144 logs.go:282] 0 containers: []
	W1002 21:17:26.348019  117144 logs.go:284] No container was found matching "kindnet"
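The per-component container sweep above is the same crictl filter applied seven times; a sketch that collapses it into a loop (component list taken from the log):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        # an empty result reproduces the "0 containers" lines above
        echo "$name: ${ids:-<none>}"
    done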
	I1002 21:17:26.348028  117144 logs.go:123] Gathering logs for kubelet ...
	I1002 21:17:26.348041  117144 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:17:26.417681  117144 logs.go:123] Gathering logs for dmesg ...
	I1002 21:17:26.417697  117144 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 21:17:26.428990  117144 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:17:26.429005  117144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:17:26.484515  117144 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:17:26.477801    2420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:17:26.478307    2420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:17:26.479830    2420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:17:26.480272    2420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:17:26.481784    2420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:17:26.477801    2420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:17:26.478307    2420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:17:26.479830    2420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:17:26.480272    2420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:17:26.481784    2420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:17:26.484525  117144 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:17:26.484535  117144 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:17:26.547751  117144 logs.go:123] Gathering logs for container status ...
	I1002 21:17:26.547769  117144 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
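Each "Gathering logs for ..." step maps onto one shell command that can be rerun on the node when triaging by hand; all four below are verbatim from the log:

    sudo journalctl -u kubelet -n 400                                        # kubelet
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # dmesg
    sudo journalctl -u crio -n 400                                           # CRI-O
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a            # container status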
	W1002 21:17:26.575363  117144 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.758806ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001007937s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001292732s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001364241s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:17:26.575418  117144 out.go:285] * 
	W1002 21:17:26.575490  117144 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.758806ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001007937s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001292732s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001364241s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:17:26.575508  117144 out.go:285] * 
	W1002 21:17:26.577398  117144 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:17:26.580919  117144 out.go:203] 
	W1002 21:17:26.582132  117144 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.758806ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001007937s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001292732s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001364241s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:17:26.582157  117144 out.go:285] * 
	I1002 21:17:26.584106  117144 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:17:22 first-172874 crio[773]: time="2025-10-02T21:17:22.149563571Z" level=info msg="createCtr: removing container 3b5d8227e026fb0148fde1c47ea9069575fbc8cf01e9d213906ea84f53be6d92" id=951b57cf-2bef-431f-88dc-89a5d2865052 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:17:22 first-172874 crio[773]: time="2025-10-02T21:17:22.149590016Z" level=info msg="createCtr: deleting container 3b5d8227e026fb0148fde1c47ea9069575fbc8cf01e9d213906ea84f53be6d92 from storage" id=951b57cf-2bef-431f-88dc-89a5d2865052 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:17:22 first-172874 crio[773]: time="2025-10-02T21:17:22.151526106Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-first-172874_kube-system_9777f526e8fd4bcef00170d28ebdd139_0" id=951b57cf-2bef-431f-88dc-89a5d2865052 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:17:23 first-172874 crio[773]: time="2025-10-02T21:17:23.126147705Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=7624db5f-57ce-4b5c-8129-9487993c0d11 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:17:23 first-172874 crio[773]: time="2025-10-02T21:17:23.127038837Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=a5eb71e9-0eb9-4c82-896c-7ad851d7c1b4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:17:23 first-172874 crio[773]: time="2025-10-02T21:17:23.127875164Z" level=info msg="Creating container: kube-system/kube-apiserver-first-172874/kube-apiserver" id=c47bbfb0-6143-40d9-93b5-191cbc78c0d7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:17:23 first-172874 crio[773]: time="2025-10-02T21:17:23.128132036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:17:23 first-172874 crio[773]: time="2025-10-02T21:17:23.131275788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:17:23 first-172874 crio[773]: time="2025-10-02T21:17:23.13176287Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:17:23 first-172874 crio[773]: time="2025-10-02T21:17:23.149353029Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c47bbfb0-6143-40d9-93b5-191cbc78c0d7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:17:23 first-172874 crio[773]: time="2025-10-02T21:17:23.150607569Z" level=info msg="createCtr: deleting container ID edd9942494c6abb20c8c3bf585df323a61abdfd03ccb754dca42886398222a26 from idIndex" id=c47bbfb0-6143-40d9-93b5-191cbc78c0d7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:17:23 first-172874 crio[773]: time="2025-10-02T21:17:23.150637735Z" level=info msg="createCtr: removing container edd9942494c6abb20c8c3bf585df323a61abdfd03ccb754dca42886398222a26" id=c47bbfb0-6143-40d9-93b5-191cbc78c0d7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:17:23 first-172874 crio[773]: time="2025-10-02T21:17:23.150687042Z" level=info msg="createCtr: deleting container edd9942494c6abb20c8c3bf585df323a61abdfd03ccb754dca42886398222a26 from storage" id=c47bbfb0-6143-40d9-93b5-191cbc78c0d7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:17:23 first-172874 crio[773]: time="2025-10-02T21:17:23.152668958Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-first-172874_kube-system_13e2ca0cc1d2d001e9728d89ef8f83a4_0" id=c47bbfb0-6143-40d9-93b5-191cbc78c0d7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:17:26 first-172874 crio[773]: time="2025-10-02T21:17:26.125473464Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=f3c85692-755b-46db-b8eb-4b88350b6af0 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:17:26 first-172874 crio[773]: time="2025-10-02T21:17:26.126279596Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=a4a5896b-a68f-45cc-99a0-8d0bb566e026 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:17:26 first-172874 crio[773]: time="2025-10-02T21:17:26.12713466Z" level=info msg="Creating container: kube-system/etcd-first-172874/etcd" id=56e8c7d2-5d1a-456c-b5cd-a4b3ec9e9f72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:17:26 first-172874 crio[773]: time="2025-10-02T21:17:26.127363015Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:17:26 first-172874 crio[773]: time="2025-10-02T21:17:26.130812139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:17:26 first-172874 crio[773]: time="2025-10-02T21:17:26.131212523Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:17:26 first-172874 crio[773]: time="2025-10-02T21:17:26.147526664Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=56e8c7d2-5d1a-456c-b5cd-a4b3ec9e9f72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:17:26 first-172874 crio[773]: time="2025-10-02T21:17:26.148837572Z" level=info msg="createCtr: deleting container ID ebcb84561a51932c9e48360a2ea18c058be98b1459310923be319a9a8513d42f from idIndex" id=56e8c7d2-5d1a-456c-b5cd-a4b3ec9e9f72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:17:26 first-172874 crio[773]: time="2025-10-02T21:17:26.148868677Z" level=info msg="createCtr: removing container ebcb84561a51932c9e48360a2ea18c058be98b1459310923be319a9a8513d42f" id=56e8c7d2-5d1a-456c-b5cd-a4b3ec9e9f72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:17:26 first-172874 crio[773]: time="2025-10-02T21:17:26.148896051Z" level=info msg="createCtr: deleting container ebcb84561a51932c9e48360a2ea18c058be98b1459310923be319a9a8513d42f from storage" id=56e8c7d2-5d1a-456c-b5cd-a4b3ec9e9f72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:17:26 first-172874 crio[773]: time="2025-10-02T21:17:26.150807219Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-first-172874_kube-system_df043a60ecefbe2f9136f424b9032256_0" id=56e8c7d2-5d1a-456c-b5cd-a4b3ec9e9f72 name=/runtime.v1.RuntimeService/CreateContainer
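Every createCtr failure in this CRI-O excerpt carries the same runtime error, "cannot open sd-bus: No such file or directory", which is consistent with the OCI runtime's systemd cgroup driver being unable to reach a systemd bus inside the node. Two hedged checks; the config path is crio's standard location and the socket paths are the conventional systemd ones, neither taken from this log:

    # which cgroup manager is crio configured with?
    sudo grep -r cgroup_manager /etc/crio/ 2>/dev/null
    # do the bus sockets a systemd cgroup driver would dial actually exist?
    ls -l /run/systemd/private /run/dbus/system_bus_socket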
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:17:27.645520    2568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:17:27.646074    2568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:17:27.647701    2568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:17:27.648129    2568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:17:27.649726    2568 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
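A quick check that this is a listener that never came up rather than a routing problem; assumes ss from iproute2 is present in the node image:

    # an empty grep result confirms nothing ever bound the apiserver port
    sudo ss -ltnp | grep ':8443' || echo 'nothing listening on 8443'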
	
	
	==> dmesg <==
	[Oct 2 19:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001885] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.393202] i8042: Warning: Keylock active
	[  +0.014868] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004074] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000918] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000777] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000928] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000948] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000882] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000907] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000959] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000952] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.481151] block sda: the capability attribute has been deprecated.
	[  +0.084222] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023627] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.300016] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:17:27 up  1:59,  0 user,  load average: 0.00, 0.16, 0.20
	Linux first-172874 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:17:22 first-172874 kubelet[1804]:  > podSandboxID="0c448f08a88b3eb489a78aabf7c6ece740cf6bb811572c4b380f5d5b5c7768b7"
	Oct 02 21:17:22 first-172874 kubelet[1804]: E1002 21:17:22.151931    1804 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:17:22 first-172874 kubelet[1804]:         container kube-scheduler start failed in pod kube-scheduler-first-172874_kube-system(9777f526e8fd4bcef00170d28ebdd139): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:17:22 first-172874 kubelet[1804]:  > logger="UnhandledError"
	Oct 02 21:17:22 first-172874 kubelet[1804]: E1002 21:17:22.151964    1804 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-first-172874" podUID="9777f526e8fd4bcef00170d28ebdd139"
	Oct 02 21:17:22 first-172874 kubelet[1804]: E1002 21:17:22.750604    1804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.58.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/first-172874?timeout=10s\": dial tcp 192.168.58.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:17:22 first-172874 kubelet[1804]: I1002 21:17:22.900362    1804 kubelet_node_status.go:75] "Attempting to register node" node="first-172874"
	Oct 02 21:17:22 first-172874 kubelet[1804]: E1002 21:17:22.900759    1804 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.58.2:8443/api/v1/nodes\": dial tcp 192.168.58.2:8443: connect: connection refused" node="first-172874"
	Oct 02 21:17:23 first-172874 kubelet[1804]: E1002 21:17:23.125665    1804 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-172874\" not found" node="first-172874"
	Oct 02 21:17:23 first-172874 kubelet[1804]: E1002 21:17:23.152906    1804 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:17:23 first-172874 kubelet[1804]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:17:23 first-172874 kubelet[1804]:  > podSandboxID="358c4b1797f2c8e8348ff4b119ae0e1722b3616f5c6831ffe6568c9a424d293d"
	Oct 02 21:17:23 first-172874 kubelet[1804]: E1002 21:17:23.153008    1804 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:17:23 first-172874 kubelet[1804]:         container kube-apiserver start failed in pod kube-apiserver-first-172874_kube-system(13e2ca0cc1d2d001e9728d89ef8f83a4): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:17:23 first-172874 kubelet[1804]:  > logger="UnhandledError"
	Oct 02 21:17:23 first-172874 kubelet[1804]: E1002 21:17:23.153046    1804 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-first-172874" podUID="13e2ca0cc1d2d001e9728d89ef8f83a4"
	Oct 02 21:17:26 first-172874 kubelet[1804]: E1002 21:17:26.125124    1804 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-172874\" not found" node="first-172874"
	Oct 02 21:17:26 first-172874 kubelet[1804]: E1002 21:17:26.137987    1804 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"first-172874\" not found"
	Oct 02 21:17:26 first-172874 kubelet[1804]: E1002 21:17:26.151046    1804 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:17:26 first-172874 kubelet[1804]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:17:26 first-172874 kubelet[1804]:  > podSandboxID="3df721b42a404a4df9efb24553c48333da656213205575e4907380342abd7911"
	Oct 02 21:17:26 first-172874 kubelet[1804]: E1002 21:17:26.151128    1804 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:17:26 first-172874 kubelet[1804]:         container etcd start failed in pod etcd-first-172874_kube-system(df043a60ecefbe2f9136f424b9032256): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:17:26 first-172874 kubelet[1804]:  > logger="UnhandledError"
	Oct 02 21:17:26 first-172874 kubelet[1804]: E1002 21:17:26.151154    1804 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-first-172874" podUID="df043a60ecefbe2f9136f424b9032256"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p first-172874 -n first-172874
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p first-172874 -n first-172874: exit status 6 (281.836885ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 21:17:28.000964  122555 status.go:458] kubeconfig endpoint: get endpoint: "first-172874" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "first-172874" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "first-172874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-172874
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-172874: (1.867027542s)
--- FAIL: TestMinikubeProfile (502.42s)
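
Note: every control-plane container above (etcd, kube-apiserver, kube-scheduler) fails with the same CreateContainerError, "cannot open sd-bus: No such file or directory", which suggests the OCI runtime's systemd integration cannot reach a D-Bus socket inside the node container; the API server therefore never comes up, which is why the describe-nodes and status calls above are refused. A minimal diagnostic sketch in Go (hypothetical, not part of the test suite; the socket path is the conventional system-bus location):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Conventional D-Bus system-bus socket path; libsystemd also honors
		// DBUS_SYSTEM_BUS_ADDRESS (assumption: the runtime in the node uses
		// the standard location).
		const sock = "/run/dbus/system_bus_socket"
		if addr := os.Getenv("DBUS_SYSTEM_BUS_ADDRESS"); addr != "" {
			fmt.Println("DBUS_SYSTEM_BUS_ADDRESS =", addr)
		}
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("sd-bus socket unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("sd-bus socket reachable at", sock)
	}

Copied into the node (e.g. over minikube ssh), this would confirm whether the socket is simply absent there.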

TestMultiNode/serial/ValidateNameConflict (7200.051s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-688188
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-688188-m01 --driver=docker  --container-runtime=crio
E1002 21:42:17.212625   12851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:45:54.124386   12851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic: test timed out after 2h0m0s
	running tests:
		TestMultiNode (28m44s)
		TestMultiNode/serial (28m44s)
		TestMultiNode/serial/ValidateNameConflict (5m3s)

goroutine 2114 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2484 +0x394
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d
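
Note: this panic is the standard Go test-binary deadline, not a minikube failure mode: go test arms a timer for the -timeout value (2h0m0s here) in (*M).startAlarm, and when it fires the binary panics and dumps every goroutine, which is where the traces below come from. A minimal sketch of bounding a single invocation with its own context deadline instead, so a hang fails only that subtest (hypothetical helper, not from multinode_test.go):

	package integration

	import (
		"context"
		"os/exec"
		"testing"
		"time"
	)

	// runWithDeadline is a hypothetical helper: it bounds one CLI invocation
	// with its own context deadline so a hang fails this subtest instead of
	// tripping the binary-wide -timeout.
	func runWithDeadline(t *testing.T, d time.Duration, name string, args ...string) []byte {
		t.Helper()
		ctx, cancel := context.WithTimeout(context.Background(), d)
		defer cancel()
		out, err := exec.CommandContext(ctx, name, args...).CombinedOutput()
		if ctx.Err() == context.DeadlineExceeded {
			t.Fatalf("%s timed out after %v:\n%s", name, d, out)
		}
		if err != nil {
			t.Fatalf("%s failed: %v\n%s", name, err, out)
		}
		return out
	}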

goroutine 1 [chan receive, 29 minutes]:
testing.(*T).Run(0xc0005028c0, {0x32034db?, 0xc0009b1a88?}, 0x3c51e10)
	/usr/local/go/src/testing/testing.go:1859 +0x431
testing.runTests.func1(0xc0005028c0)
	/usr/local/go/src/testing/testing.go:2279 +0x37
testing.tRunner(0xc0005028c0, 0xc0009b1bc8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
testing.runTests(0xc000648150, {0x5c616c0, 0x2c, 0x2c}, {0xffffffffffffffff?, 0xc00054fa00?, 0x5c89dc0?})
	/usr/local/go/src/testing/testing.go:2277 +0x4b4
testing.(*M).Run(0xc00084cb40)
	/usr/local/go/src/testing/testing.go:2142 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc00084cb40)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xdb
main.main()
	_testmain.go:133 +0xa8

goroutine 123 [chan receive, 119 minutes]:
testing.(*T).Parallel(0xc0005836c0)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0005836c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestOffline(0xc0005836c0)
	/home/jenkins/workspace/Build_Cross/test/integration/aab_offline_test.go:32 +0x39
testing.tRunner(0xc0005836c0, 0x3c51e28)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 537 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3fae230, 0xc0001101c0}, 0xc0017a6f50, 0xc0017a6f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3fae230, 0xc0001101c0}, 0x0?, 0xc0017a6f50, 0xc0017a6f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3fae230?, 0xc0001101c0?}, 0xc001791c00?, 0x55d160?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593245?, 0xc0005b6300?, 0xc001784000?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 569
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286
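
Note: goroutines 536-538, 568 and 569 are client-go's TLS client-certificate rotation machinery, left polling in the background after an earlier API client was built; the loop is the stock apimachinery wait helper. A minimal sketch of the same polling primitive, using the current entry point (the PollImmediateUntilWithContext seen in the trace is deprecated in newer apimachinery in favor of PollUntilContextCancel; the condition below is a placeholder):

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		// Poll once per second until the context ends; "immediate" means the
		// condition is tried before the first sleep. The placeholder condition
		// never succeeds, so the 5s deadline ends the loop.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		err := wait.PollUntilContextCancel(ctx, time.Second, true, func(ctx context.Context) (bool, error) {
			fmt.Println("checking condition...")
			return false, nil
		})
		fmt.Println("poll ended:", err)
	}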

goroutine 2085 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x798b0563b7e0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0x3f635e0?, 0x5aa8b58?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001528240, {0xc0014c5746, 0x8ba, 0x8ba})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00011c2c8, {0xc0014c5746?, 0x41835f?, 0x2c42f20?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc00077e270, {0x3f63640, 0xc0003a6150})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f637c0, 0xc00077e270}, {0x3f63640, 0xc0003a6150}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00011c2c8?, {0x3f637c0, 0xc00077e270})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc00011c2c8, {0x3f637c0, 0xc00077e270})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f637c0, 0xc00077e270}, {0x3f636c0, 0xc00011c2c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0007b2400?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2101
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

goroutine 154 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc001592000)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc001592000)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc001592000)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:36 +0x87
testing.tRunner(0xc001592000, 0x3c51d28)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 1864 [chan receive, 5 minutes]:
testing.(*T).Run(0xc001db2540, {0x3218126?, 0x40962a4?}, 0xc0007b2000)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode.func1(0xc001db2540)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:84 +0x17d
testing.tRunner(0xc001db2540, 0xc0009b9140)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1831
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 569 [chan receive, 75 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc0016aa3c0, 0xc0001101c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 567
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

goroutine 568 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3fc0920, {{0x3fb5948, 0xc00022a340?}, 0xc000a81260?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 567
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

goroutine 155 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc001592380)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc001592380)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc001592380)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc001592380, 0x3c51d20)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 157 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc001593340)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc001593340)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc001593340)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:83 +0x87
testing.tRunner(0xc001593340, 0x3c51d70)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 158 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc001593500)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc001593500)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc001593500)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:146 +0x87
testing.tRunner(0xc001593500, 0x3c51d68)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 160 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc001593c00)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc001593c00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestKVMDriverInstallOrUpdate(0xc001593c00)
	/home/jenkins/workspace/Build_Cross/test/integration/driver_install_or_update_test.go:48 +0x87
testing.tRunner(0xc001593c00, 0x3c51db8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 1831 [chan receive, 29 minutes]:
testing.(*T).Run(0xc0016ecfc0, {0x31f3138?, 0x1a3185c5000?}, 0xc0009b9140)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode(0xc0016ecfc0)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:59 +0x367
testing.tRunner(0xc0016ecfc0, 0x3c51e10)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 540 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc00179a000, 0xc000111260)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 539
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 192 [IO wait, 102 minutes]:
internal/poll.runtime_pollWait(0x798b4ec67bc0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0000ee280?, 0x900000036?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0000ee280)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc0000ee280)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc00039bb00)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1b
net.(*TCPListener).Accept(0xc00039bb00)
	/usr/local/go/src/net/tcpsock.go:380 +0x30
net/http.(*Server).Serve(0xc0017e4000, {0x3f9b790, 0xc00039bb00})
	/usr/local/go/src/net/http/server.go:3424 +0x30c
net/http.(*Server).ListenAndServe(0xc0017e4000)
	/usr/local/go/src/net/http/server.go:3350 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 189
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x129
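
Note: goroutine 192 is the HTTP proxy that the functional tests start once and leave serving for the life of the binary; it is a plain net/http server blocked in Accept. A minimal sketch of that start-and-forget pattern (hypothetical, simplified from startHTTPProxy):

	package main

	import (
		"log"
		"net"
		"net/http"
	)

	func main() {
		// Listen first so the chosen port is known, then serve in a goroutine,
		// mirroring the ListenAndServe call blocked in Accept in goroutine 192.
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			log.Fatal(err)
		}
		log.Println("proxy listening on", ln.Addr())
		srv := &http.Server{Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.WriteHeader(http.StatusOK) // placeholder handler
		})}
		go func() {
			if err := srv.Serve(ln); err != http.ErrServerClosed {
				log.Printf("server exited: %v", err)
			}
		}()
		select {} // keep the process alive, as the test binary does
	}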

goroutine 728 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc00023e600, 0xc0017842a0)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 366
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 2086 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc00023e480, 0xc001784230)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2101
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 2084 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x798b0563b5b0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001528180?, 0xc00039f28d?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001528180, {0xc00039f28d, 0x573, 0x573})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00011c1f8, {0xc00039f28d?, 0x41835f?, 0x2c42f20?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc00077e1b0, {0x3f63640, 0xc0003a60f8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f637c0, 0xc00077e1b0}, {0x3f63640, 0xc0003a60f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00011c1f8?, {0x3f637c0, 0xc00077e1b0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc00011c1f8, {0x3f637c0, 0xc00077e1b0})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f637c0, 0xc00077e1b0}, {0x3f636c0, 0xc00011c1f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0007b2000?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2101
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

goroutine 643 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001768180, 0xc001785ce0)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 642
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 536 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008a2f10, 0x23)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc001589ce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3fc3d20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0016aa3c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4c5c93?, 0xc000659320?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3fae230?, 0xc0001101c0?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3fae230, 0xc0001101c0}, 0xc001589f50, {0x3f65240, 0xc0009b8660}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f65240?, 0xc0009b8660?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000736360, 0x3b9aca00, 0x0, 0x1, 0xc0001101c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 569
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

goroutine 2101 [syscall, 5 minutes]:
syscall.Syscall6(0xf7, 0x3, 0xd, 0xc0009b3a08, 0x4, 0xc001db0120, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
internal/syscall/unix.Waitid(0xc0009b3a36?, 0xc0009b3b60?, 0x5930ab?, 0x7ffc3f7071af?, 0x0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x39
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:106
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:251
os.(*Process).pidfdWait(0xc0008dc000?)
	/usr/local/go/src/os/pidfd_linux.go:105 +0x209
os.(*Process).wait(0xc0000bf808?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc00023e480)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc00023e480)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc001db2380, 0xc00023e480)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateNameConflict({0x3fadeb0, 0xc00035c930}, 0xc001db2380, {0xc000518950, 0x10})
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:464 +0x48d
k8s.io/minikube/test/integration.TestMultiNode.func1.1(0xc001db2380?)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:86 +0x6b
testing.tRunner(0xc001db2380, 0xc0007b2000)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1864
	/usr/local/go/src/testing/testing.go:1851 +0x413
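
Note: goroutine 2101 is the hung test itself: validateNameConflict calls the integration Run helper, which wraps exec.Cmd.Run and is blocked in Wait on the `minikube start -p multinode-688188-m01` child, while goroutines 2084/2085 drain its stdout and stderr. A hypothetical simplification of that helper:

	package integration

	import (
		"bytes"
		"os/exec"
		"testing"
		"time"
	)

	// run is a hypothetical simplification of the integration Run helper:
	// start the command, let the runtime copy its output into buffers, and
	// block until the child exits.
	func run(t *testing.T, cmd *exec.Cmd) (string, string) {
		t.Helper()
		var stdout, stderr bytes.Buffer
		cmd.Stdout, cmd.Stderr = &stdout, &stderr
		start := time.Now()
		// Run = Start + Wait; Wait is where goroutine 2101 above is parked.
		if err := cmd.Run(); err != nil {
			t.Fatalf("%v failed after %v: %v\nstderr:\n%s", cmd.Args, time.Since(start), err, stderr.String())
		}
		return stdout.String(), stderr.String()
	}

Because the buffers are not *os.File values, os/exec copies the child's pipes in helper goroutines, which is exactly what goroutines 2084 and 2085 in the dump are doing.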

goroutine 538 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 537
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb


Test pass (92/166)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.46
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 3.85
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.39
21 TestBinaryMirror 0.79
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
39 TestErrorSpam/start 0.58
40 TestErrorSpam/status 0.84
41 TestErrorSpam/pause 1.29
42 TestErrorSpam/unpause 1.29
43 TestErrorSpam/stop 1.39
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
50 TestFunctional/serial/KubeContext 0.04
54 TestFunctional/serial/CacheCmd/cache/add_remote 2.58
55 TestFunctional/serial/CacheCmd/cache/add_local 0.73
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
57 TestFunctional/serial/CacheCmd/cache/list 0.05
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.46
60 TestFunctional/serial/CacheCmd/cache/delete 0.09
65 TestFunctional/serial/LogsCmd 0.81
66 TestFunctional/serial/LogsFileCmd 0.85
69 TestFunctional/parallel/ConfigCmd 0.32
71 TestFunctional/parallel/DryRun 0.38
72 TestFunctional/parallel/InternationalLanguage 0.19
78 TestFunctional/parallel/AddonsCmd 0.14
81 TestFunctional/parallel/SSHCmd 0.72
82 TestFunctional/parallel/CpCmd 1.84
84 TestFunctional/parallel/FileSync 0.3
85 TestFunctional/parallel/CertSync 1.85
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
93 TestFunctional/parallel/License 0.25
94 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
95 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
96 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
98 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
99 TestFunctional/parallel/Version/short 0.05
100 TestFunctional/parallel/Version/components 0.48
101 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
102 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
103 TestFunctional/parallel/ImageCommands/ImageBuild 2.71
104 TestFunctional/parallel/ImageCommands/Setup 0.52
110 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
119 TestFunctional/parallel/MountCmd/specific-port 2.01
122 TestFunctional/parallel/MountCmd/VerifyCleanup 1.58
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
128 TestFunctional/parallel/ProfileCmd/profile_list 0.36
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/delete_echo-server_images 0.04
135 TestFunctional/delete_my-image_image 0.02
136 TestFunctional/delete_minikube_cached_images 0.02
164 TestJSONOutput/start/Audit 0
169 TestJSONOutput/pause/Command 0.46
170 TestJSONOutput/pause/Audit 0
172 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/unpause/Command 0.43
176 TestJSONOutput/unpause/Audit 0
178 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/stop/Command 1.22
182 TestJSONOutput/stop/Audit 0
184 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
186 TestErrorJSONOutput 0.2
188 TestKicCustomNetwork/create_custom_network 28.85
189 TestKicCustomNetwork/use_default_bridge_network 26.02
190 TestKicExistingNetwork 24.07
191 TestKicCustomSubnet 28.72
192 TestKicStaticIP 27.1
193 TestMainNoArgs 0.05
197 TestMountStart/serial/StartWithMountFirst 5.4
198 TestMountStart/serial/VerifyMountFirst 0.25
199 TestMountStart/serial/StartWithMountSecond 5.62
200 TestMountStart/serial/VerifyMountSecond 0.26
201 TestMountStart/serial/DeleteFirst 1.64
202 TestMountStart/serial/VerifyMountPostDelete 0.26
203 TestMountStart/serial/Stop 1.18
204 TestMountStart/serial/RestartStopped 7.22
205 TestMountStart/serial/VerifyMountPostStop 0.25

TestDownloadOnly/v1.28.0/json-events (5.46s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-572495 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-572495 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.462327497s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.46s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1002 19:46:42.891324   12851 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1002 19:46:42.891412   12851 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-572495
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-572495: exit status 85 (63.677631ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-572495 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-572495 │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 19:46:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 19:46:37.469783   12863 out.go:360] Setting OutFile to fd 1 ...
	I1002 19:46:37.470041   12863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:46:37.470051   12863 out.go:374] Setting ErrFile to fd 2...
	I1002 19:46:37.470055   12863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:46:37.470241   12863 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	W1002 19:46:37.470367   12863 root.go:315] Error reading config file at /home/jenkins/minikube-integration/21683-9327/.minikube/config/config.json: open /home/jenkins/minikube-integration/21683-9327/.minikube/config/config.json: no such file or directory
	I1002 19:46:37.470862   12863 out.go:368] Setting JSON to true
	I1002 19:46:37.471838   12863 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1746,"bootTime":1759432651,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 19:46:37.471889   12863 start.go:140] virtualization: kvm guest
	I1002 19:46:37.474065   12863 out.go:99] [download-only-572495] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1002 19:46:37.474183   12863 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 19:46:37.474201   12863 notify.go:221] Checking for updates...
	I1002 19:46:37.476361   12863 out.go:171] MINIKUBE_LOCATION=21683
	I1002 19:46:37.477643   12863 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:46:37.478874   12863 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 19:46:37.480080   12863 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 19:46:37.481301   12863 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 19:46:37.483387   12863 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 19:46:37.483578   12863 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 19:46:37.507290   12863 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 19:46:37.507377   12863 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 19:46:37.895712   12863 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-02 19:46:37.885287084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 19:46:37.895819   12863 docker.go:319] overlay module found
	I1002 19:46:37.897441   12863 out.go:99] Using the docker driver based on user configuration
	I1002 19:46:37.897476   12863 start.go:306] selected driver: docker
	I1002 19:46:37.897481   12863 start.go:936] validating driver "docker" against <nil>
	I1002 19:46:37.897548   12863 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 19:46:37.958317   12863 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-02 19:46:37.947526704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 19:46:37.958456   12863 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 19:46:37.958970   12863 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1002 19:46:37.959120   12863 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 19:46:37.960808   12863 out.go:171] Using Docker driver with root privileges
	I1002 19:46:37.961854   12863 cni.go:84] Creating CNI manager for ""
	I1002 19:46:37.961937   12863 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 19:46:37.961947   12863 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 19:46:37.962002   12863 start.go:350] cluster config:
	{Name:download-only-572495 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-572495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 19:46:37.963216   12863 out.go:99] Starting "download-only-572495" primary control-plane node in "download-only-572495" cluster
	I1002 19:46:37.963233   12863 cache.go:124] Beginning downloading kic base image for docker with crio
	I1002 19:46:37.964345   12863 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1002 19:46:37.964375   12863 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 19:46:37.964472   12863 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 19:46:37.982728   12863 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 19:46:37.982915   12863 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 19:46:37.983006   12863 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 19:46:37.985471   12863 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1002 19:46:37.985491   12863 cache.go:59] Caching tarball of preloaded images
	I1002 19:46:37.985631   12863 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 19:46:37.987504   12863 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1002 19:46:37.987529   12863 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1002 19:46:38.017696   12863 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1002 19:46:38.017831   12863 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1002 19:46:40.959199   12863 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1002 19:46:40.959726   12863 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/download-only-572495/config.json ...
	I1002 19:46:40.959771   12863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/download-only-572495/config.json: {Name:mk02e72e1cb2794fefefa34a16bd34de80b3f93b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:46:40.959934   12863 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1002 19:46:40.960175   12863 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21683-9327/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-572495 host does not exist
	  To start a cluster, run: "minikube start -p download-only-572495"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
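
Note: the Last Start log above shows the download-only flow: look up the preload's MD5 from the GCS API, then fetch the tarball with a ?checksum=md5:... query so the transfer is verified before use. A minimal sketch of a checksum-verified download (hypothetical, not minikube's download.go):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// fetchVerified downloads url into dest while hashing the stream, then
	// compares the digest against the expected MD5.
	func fetchVerified(url, wantMD5, dest string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		f, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		if len(os.Args) != 4 {
			fmt.Println("usage: fetch <url> <md5> <dest>")
			return
		}
		if err := fetchVerified(os.Args[1], os.Args[2], os.Args[3]); err != nil {
			fmt.Println("download failed:", err)
			os.Exit(1)
		}
		fmt.Println("verified download complete")
	}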

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-572495
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.1/json-events (3.85s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-961266 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-961266 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.844747986s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.85s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1002 19:46:47.153139   12851 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 19:46:47.153178   12851 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-9327/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-961266
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-961266: exit status 85 (60.555035ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-572495 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-572495 │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │ 02 Oct 25 19:46 UTC │
	│ delete  │ -p download-only-572495                                                                                                                                                   │ download-only-572495 │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │ 02 Oct 25 19:46 UTC │
	│ start   │ -o=json --download-only -p download-only-961266 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-961266 │ jenkins │ v1.37.0 │ 02 Oct 25 19:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 19:46:43
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 19:46:43.348001   13212 out.go:360] Setting OutFile to fd 1 ...
	I1002 19:46:43.348229   13212 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:46:43.348239   13212 out.go:374] Setting ErrFile to fd 2...
	I1002 19:46:43.348243   13212 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 19:46:43.348445   13212 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 19:46:43.348920   13212 out.go:368] Setting JSON to true
	I1002 19:46:43.349759   13212 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1752,"bootTime":1759432651,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 19:46:43.349856   13212 start.go:140] virtualization: kvm guest
	I1002 19:46:43.351899   13212 out.go:99] [download-only-961266] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 19:46:43.352054   13212 notify.go:221] Checking for updates...
	I1002 19:46:43.353165   13212 out.go:171] MINIKUBE_LOCATION=21683
	I1002 19:46:43.354572   13212 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:46:43.355800   13212 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 19:46:43.356843   13212 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 19:46:43.357917   13212 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 19:46:43.360100   13212 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 19:46:43.360336   13212 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 19:46:43.382208   13212 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 19:46:43.382309   13212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 19:46:43.436098   13212 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-02 19:46:43.426319874 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 19:46:43.436185   13212 docker.go:319] overlay module found
	I1002 19:46:43.438020   13212 out.go:99] Using the docker driver based on user configuration
	I1002 19:46:43.438060   13212 start.go:306] selected driver: docker
	I1002 19:46:43.438068   13212 start.go:936] validating driver "docker" against <nil>
	I1002 19:46:43.438135   13212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 19:46:43.491413   13212 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-02 19:46:43.481863279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 19:46:43.491594   13212 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 19:46:43.492092   13212 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1002 19:46:43.492226   13212 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 19:46:43.494192   13212 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-961266 host does not exist
	  To start a cluster, run: "minikube start -p download-only-961266"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)
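
Note: the exit status 85 above is expected. A --download-only start caches images and binaries but never creates the host, so there is no node for `logs` to read from; the test passes because it asserts exactly this failure. A minimal by-hand reproduction (profile name and flags taken from the audit table above, with the duplicated --container-runtime flag dropped):

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-961266 --force --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 logs -p download-only-961266   # exits 85: host does not exist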

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-961266
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (0.39s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-213285 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-213285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-213285
--- PASS: TestDownloadOnlyKic (0.39s)

TestBinaryMirror (0.79s)

=== RUN   TestBinaryMirror
I1002 19:46:48.216383   12851 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-331754 --alsologtostderr --binary-mirror http://127.0.0.1:42675 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-331754" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-331754
--- PASS: TestBinaryMirror (0.79s)
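
Note: TestBinaryMirror stands up a throwaway HTTP server on 127.0.0.1 and points --binary-mirror at it, so kubectl is fetched from the mirror rather than dl.k8s.io (the "Not caching binary" line above shows the upstream URL being bypassed). A rough equivalent outside the harness, as a sketch only: the mirror directory layout must mimic dl.k8s.io's /release/<version>/bin/... paths, and the port and directory here are illustrative, not what the test used internally:

    python3 -m http.server 42675 --directory ./mirror &
    out/minikube-linux-amd64 start --download-only -p binary-mirror-331754 --binary-mirror http://127.0.0.1:42675 --driver=docker --container-runtime=crio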

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-486748
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-486748: exit status 85 (52.953056ms)

-- stdout --
	* Profile "addons-486748" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-486748"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-486748
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-486748: exit status 85 (52.331571ms)

-- stdout --
	* Profile "addons-486748" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-486748"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
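
Note: both PreSetup checks assert exit status 85 when the target profile does not exist yet; enable and disable behave symmetrically. To see the same behavior interactively before the Setup test has created the profile:

    out/minikube-linux-amd64 profile list                               # addons-486748 is absent at this point
    out/minikube-linux-amd64 addons enable dashboard -p addons-486748   # exit 85: "Profile ... not found"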

TestErrorSpam/start (0.58s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 start --dry-run
--- PASS: TestErrorSpam/start (0.58s)

TestErrorSpam/status (0.84s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 status: exit status 6 (280.161723ms)

-- stdout --
	nospam-547008
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:03:47.839467   24969 status.go:458] kubeconfig endpoint: get endpoint: "nospam-547008" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 status" failed: exit status 6
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 status: exit status 6 (280.879654ms)

-- stdout --
	nospam-547008
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:03:48.120366   25080 status.go:458] kubeconfig endpoint: get endpoint: "nospam-547008" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 status" failed: exit status 6
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 status
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 status: exit status 6 (282.749513ms)

-- stdout --
	nospam-547008
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:03:48.403257   25190 status.go:458] kubeconfig endpoint: get endpoint: "nospam-547008" does not appear in /home/jenkins/minikube-integration/21683-9327/kubeconfig

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 status" failed: exit status 6
--- PASS: TestErrorSpam/status (0.84s)
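
Note: the three status failures above are expected noise for this test. The output shows the apiserver Stopped and the kubeconfig Misconfigured (the stderr lines confirm the profile's endpoint is missing from the kubeconfig), and minikube status encodes that state in a non-zero exit. The warning itself names the repair, which the spam test deliberately does not run (sketch):

    out/minikube-linux-amd64 -p nospam-547008 update-context   # re-point the kubectl context at the profile's current endpoint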

TestErrorSpam/pause (1.29s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 pause
--- PASS: TestErrorSpam/pause (1.29s)

TestErrorSpam/unpause (1.29s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 unpause
--- PASS: TestErrorSpam/unpause (1.29s)

TestErrorSpam/stop (1.39s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 stop: (1.208128789s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-547008 --log_dir /tmp/nospam-547008 stop
--- PASS: TestErrorSpam/stop (1.39s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21683-9327/.minikube/files/etc/test/nested/copy/12851/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.58s)

TestFunctional/serial/CacheCmd/cache/add_local (0.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-753218 /tmp/TestFunctionalserialCacheCmdcacheadd_local2906282939/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 cache add minikube-local-cache-test:functional-753218
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 cache delete minikube-local-cache-test:functional-753218
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-753218
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.73s)
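
Note: add_local exercises the path where a locally built image, rather than one pulled from a registry, is pushed into minikube's cache. The sequence, reconstructed from the lines above (the build context in the test is a temp dir; any directory containing a Dockerfile works):

    docker build -t minikube-local-cache-test:functional-753218 .
    out/minikube-linux-amd64 -p functional-753218 cache add minikube-local-cache-test:functional-753218
    out/minikube-linux-amd64 -p functional-753218 cache delete minikube-local-cache-test:functional-753218
    docker rmi minikube-local-cache-test:functional-753218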

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (264.249778ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.46s)
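
Note: the exit status 1 in the middle of this test is the assertion, not a failure: after `crictl rmi` the image must be gone, and `cache reload` must bring it back. The round trip, exactly as the test runs it:

    out/minikube-linux-amd64 -p functional-753218 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-753218 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image no longer present
    out/minikube-linux-amd64 -p functional-753218 cache reload
    out/minikube-linux-amd64 -p functional-753218 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again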

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/LogsCmd (0.81s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 logs
--- PASS: TestFunctional/serial/LogsCmd (0.81s)

TestFunctional/serial/LogsFileCmd (0.85s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 logs --file /tmp/TestFunctionalserialLogsFileCmd3211587730/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.85s)

TestFunctional/parallel/ConfigCmd (0.32s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 config get cpus: exit status 14 (50.523955ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 config get cpus: exit status 14 (54.115618ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)
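
Note: exit status 14, with "specified key could not be found in config", is what `config get` returns for an unset key, so the two failures above are the assertions the test is making. The full set/get/unset cycle under test:

    out/minikube-linux-amd64 -p functional-753218 config set cpus 2
    out/minikube-linux-amd64 -p functional-753218 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-753218 config unset cpus
    out/minikube-linux-amd64 -p functional-753218 config get cpus     # exit 14: key not found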

TestFunctional/parallel/DryRun (0.38s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-753218 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-753218 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (167.3964ms)

-- stdout --
	* [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1002 20:31:01.741562   60489 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:31:01.741920   60489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:31:01.741938   60489 out.go:374] Setting ErrFile to fd 2...
	I1002 20:31:01.741946   60489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:31:01.742213   60489 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:31:01.742827   60489 out.go:368] Setting JSON to false
	I1002 20:31:01.743931   60489 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4411,"bootTime":1759432651,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:31:01.744056   60489 start.go:140] virtualization: kvm guest
	I1002 20:31:01.746343   60489 out.go:179] * [functional-753218] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:31:01.747898   60489 notify.go:221] Checking for updates...
	I1002 20:31:01.747927   60489 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:31:01.749525   60489 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:31:01.751030   60489 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:31:01.752477   60489 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:31:01.753908   60489 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:31:01.755189   60489 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:31:01.757776   60489 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:31:01.758491   60489 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:31:01.785845   60489 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:31:01.785944   60489 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:31:01.848502   60489 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:31:01.835691823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:31:01.848605   60489 docker.go:319] overlay module found
	I1002 20:31:01.850605   60489 out.go:179] * Using the docker driver based on existing profile
	I1002 20:31:01.851825   60489 start.go:306] selected driver: docker
	I1002 20:31:01.851845   60489 start.go:936] validating driver "docker" against &{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:31:01.851928   60489 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:31:01.853699   60489 out.go:203] 
	W1002 20:31:01.854942   60489 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 20:31:01.856223   60489 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-753218 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)
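
Note: --dry-run runs the full validation pass without creating or changing anything, so the 250MB request trips RSRC_INSUFFICIENT_REQ_MEMORY (usable minimum 1800MB, per the message above) with exit 23, while the second, unconstrained dry run succeeds. A sketch of a passing variant with an explicit allocation; any value at or above the stated minimum should validate:

    out/minikube-linux-amd64 start -p functional-753218 --dry-run --memory 2048mb --driver=docker --container-runtime=crio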

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-753218 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-753218 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (188.788585ms)

-- stdout --
	* [functional-753218] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1002 20:31:01.267380   60207 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:31:01.267534   60207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:31:01.267540   60207 out.go:374] Setting ErrFile to fd 2...
	I1002 20:31:01.267545   60207 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:31:01.268194   60207 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
	I1002 20:31:01.268817   60207 out.go:368] Setting JSON to false
	I1002 20:31:01.269983   60207 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4410,"bootTime":1759432651,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:31:01.270065   60207 start.go:140] virtualization: kvm guest
	I1002 20:31:01.272817   60207 out.go:179] * [functional-753218] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1002 20:31:01.274347   60207 notify.go:221] Checking for updates...
	I1002 20:31:01.274416   60207 out.go:179]   - MINIKUBE_LOCATION=21683
	I1002 20:31:01.275964   60207 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:31:01.277615   60207 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig
	I1002 20:31:01.280780   60207 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube
	I1002 20:31:01.282281   60207 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:31:01.283865   60207 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:31:01.286822   60207 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:31:01.287445   60207 driver.go:422] Setting default libvirt URI to qemu:///system
	I1002 20:31:01.317909   60207 docker.go:124] docker version: linux-28.4.0:Docker Engine - Community
	I1002 20:31:01.318021   60207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:31:01.386805   60207 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:31:01.372687853 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:31:01.386956   60207 docker.go:319] overlay module found
	I1002 20:31:01.389935   60207 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1002 20:31:01.391411   60207 start.go:306] selected driver: docker
	I1002 20:31:01.391431   60207 start.go:936] validating driver "docker" against &{Name:functional-753218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753218 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:31:01.391544   60207 start.go:947] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:31:01.393625   60207 out.go:203] 
	W1002 20:31:01.395065   60207 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 20:31:01.396524   60207 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
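
Note: the French output above is the same RSRC_INSUFFICIENT_REQ_MEMORY failure as in DryRun; this test only checks that the translated strings are served. Language selection follows the process locale, so something like the following should reproduce it; the exact locale variable is an assumption here, since the harness sets it internally:

    LC_ALL=fr out/minikube-linux-amd64 start -p functional-753218 --dry-run --memory 250MB --driver=docker --container-runtime=crio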

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (1.84s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh -n functional-753218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 cp functional-753218:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3397024456/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh -n functional-753218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh -n functional-753218 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.84s)
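
Note: CpCmd covers three copy directions: a host file into the node, a node file back out to the host, and a host file into a node directory that does not yet exist (created on the fly, as the follow-up cat confirms). Condensed, with the host-side destination path made illustrative:

    out/minikube-linux-amd64 -p functional-753218 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-753218 cp functional-753218:/home/docker/cp-test.txt ./cp-test.txt
    out/minikube-linux-amd64 -p functional-753218 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt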

TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/12851/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "sudo cat /etc/test/nested/copy/12851/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

TestFunctional/parallel/CertSync (1.85s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/12851.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "sudo cat /etc/ssl/certs/12851.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/12851.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "sudo cat /usr/share/ca-certificates/12851.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/128512.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "sudo cat /etc/ssl/certs/128512.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/128512.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "sudo cat /usr/share/ca-certificates/128512.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.85s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 ssh "sudo systemctl is-active docker": exit status 1 (287.401286ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 ssh "sudo systemctl is-active containerd": exit status 1 (312.462785ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
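
Note: on a cri-o cluster this test wants docker and containerd to be inactive. `systemctl is-active` exits non-zero for an inactive unit (status 3, visible in the stderr above), which minikube ssh reports as its own exit status 1, so both "failures" are the expected outcome. The complementary check, with the unit name assumed to be crio as on standard minikube cri-o nodes:

    out/minikube-linux-amd64 -p functional-753218 ssh "sudo systemctl is-active crio"   # should print "active" and exit 0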

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-753218 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-753218 image ls --format short --alsologtostderr:
I1002 20:31:04.521127   62655 out.go:360] Setting OutFile to fd 1 ...
I1002 20:31:04.521373   62655 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:31:04.521381   62655 out.go:374] Setting ErrFile to fd 2...
I1002 20:31:04.521386   62655 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:31:04.521553   62655 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
I1002 20:31:04.522140   62655 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:31:04.522233   62655 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:31:04.522710   62655 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
I1002 20:31:04.542372   62655 ssh_runner.go:195] Run: systemctl --version
I1002 20:31:04.542411   62655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
I1002 20:31:04.564169   62655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
I1002 20:31:04.667043   62655 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-753218 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-753218 image ls --format table --alsologtostderr:
I1002 20:31:05.420399   63151 out.go:360] Setting OutFile to fd 1 ...
I1002 20:31:05.420665   63151 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:31:05.420675   63151 out.go:374] Setting ErrFile to fd 2...
I1002 20:31:05.420679   63151 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:31:05.420943   63151 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
I1002 20:31:05.421517   63151 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:31:05.421621   63151 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:31:05.422028   63151 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
I1002 20:31:05.438958   63151 ssh_runner.go:195] Run: systemctl --version
I1002 20:31:05.439010   63151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
I1002 20:31:05.454970   63151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
I1002 20:31:05.555996   63151 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-753218 image ls --format json --alsologtostderr:
[{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9
da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controll
er-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606c
c0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-753218 image ls --format json --alsologtostderr:
I1002 20:31:05.211949   63079 out.go:360] Setting OutFile to fd 1 ...
I1002 20:31:05.212198   63079 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:31:05.212208   63079 out.go:374] Setting ErrFile to fd 2...
I1002 20:31:05.212212   63079 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:31:05.212381   63079 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
I1002 20:31:05.212940   63079 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:31:05.213037   63079 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:31:05.213423   63079 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
I1002 20:31:05.232291   63079 ssh_runner.go:195] Run: systemctl --version
I1002 20:31:05.232344   63079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
I1002 20:31:05.251299   63079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
I1002 20:31:05.350017   63079 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
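Note: the image ls --format json payload above is a flat array of image records. A minimal Go sketch that decodes it (the struct mirrors the field names visible in the output; it is an illustration, not minikube's own type):

package main

import (
	"encoding/json"
	"fmt"
)

// image matches the fields seen in the log: id, repoDigests, repoTags, size.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, encoded as a string
}

func main() {
	// One record copied verbatim from the output above.
	payload := []byte(`[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]`)
	var images []image
	if err := json.Unmarshal(payload, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s  %s bytes\n", img.RepoTags[0], img.Size)
	}
}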

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-753218 image ls --format yaml --alsologtostderr:
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-753218 image ls --format yaml --alsologtostderr:
I1002 20:31:04.996560   62974 out.go:360] Setting OutFile to fd 1 ...
I1002 20:31:04.996868   62974 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:31:04.996878   62974 out.go:374] Setting ErrFile to fd 2...
I1002 20:31:04.996882   62974 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:31:04.997092   62974 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
I1002 20:31:04.997683   62974 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:31:04.997785   62974 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:31:04.998220   62974 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
I1002 20:31:05.017058   62974 ssh_runner.go:195] Run: systemctl --version
I1002 20:31:05.017113   62974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
I1002 20:31:05.035062   62974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
I1002 20:31:05.135497   62974 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 ssh pgrep buildkitd: exit status 1 (255.011741ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 image build -t localhost/my-image:functional-753218 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-753218 image build -t localhost/my-image:functional-753218 testdata/build --alsologtostderr: (2.249263434s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-753218 image build -t localhost/my-image:functional-753218 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 06c56bab4b9
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-753218
--> 9bfb874ecf4
Successfully tagged localhost/my-image:functional-753218
9bfb874ecf4629ef0411f9ca04dcdfff5daf3facfc0d80d7b852b71634c4665c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-753218 image build -t localhost/my-image:functional-753218 testdata/build --alsologtostderr:
I1002 20:31:05.002382   62980 out.go:360] Setting OutFile to fd 1 ...
I1002 20:31:05.002689   62980 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:31:05.002699   62980 out.go:374] Setting ErrFile to fd 2...
I1002 20:31:05.002703   62980 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 20:31:05.002934   62980 root.go:339] Updating PATH: /home/jenkins/minikube-integration/21683-9327/.minikube/bin
I1002 20:31:05.003465   62980 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:31:05.004048   62980 config.go:182] Loaded profile config "functional-753218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 20:31:05.004426   62980 cli_runner.go:164] Run: docker container inspect functional-753218 --format={{.State.Status}}
I1002 20:31:05.023125   62980 ssh_runner.go:195] Run: systemctl --version
I1002 20:31:05.023179   62980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753218
I1002 20:31:05.040777   62980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21683-9327/.minikube/machines/functional-753218/id_rsa Username:docker}
I1002 20:31:05.140034   62980 build_images.go:161] Building image from path: /tmp/build.872927484.tar
I1002 20:31:05.140130   62980 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 20:31:05.147616   62980 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.872927484.tar
I1002 20:31:05.151167   62980 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.872927484.tar: stat -c "%s %y" /var/lib/minikube/build/build.872927484.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.872927484.tar': No such file or directory
I1002 20:31:05.151191   62980 ssh_runner.go:362] scp /tmp/build.872927484.tar --> /var/lib/minikube/build/build.872927484.tar (3072 bytes)
I1002 20:31:05.169201   62980 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.872927484
I1002 20:31:05.177576   62980 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.872927484 -xf /var/lib/minikube/build/build.872927484.tar
I1002 20:31:05.185712   62980 crio.go:315] Building image: /var/lib/minikube/build/build.872927484
I1002 20:31:05.185761   62980 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-753218 /var/lib/minikube/build/build.872927484 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1002 20:31:07.184240   62980 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-753218 /var/lib/minikube/build/build.872927484 --cgroup-manager=cgroupfs: (1.998454654s)
I1002 20:31:07.184302   62980 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.872927484
I1002 20:31:07.191920   62980 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.872927484.tar
I1002 20:31:07.199179   62980 build_images.go:217] Built localhost/my-image:functional-753218 from /tmp/build.872927484.tar
I1002 20:31:07.199210   62980 build_images.go:133] succeeded building to: functional-753218
I1002 20:31:07.199216   62980 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.71s)
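Note: the stderr above shows the shape of a crio-backed image build: the context is tarred locally, copied into the node under /var/lib/minikube/build, unpacked, and built with sudo podman build. A rough Go sketch of the same sequence driven through the CLI (illustrative; the minikube cp/ssh commands and the paths here are assumptions, since the real test goes through minikube image build and an internal SSH runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes one command, echoing it and its combined output.
func run(args ...string) error {
	fmt.Println("+", strings.Join(args, " "))
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if len(out) > 0 {
		fmt.Print(string(out))
	}
	return err
}

func main() {
	profile := "functional-753218"
	tag := "localhost/my-image:" + profile
	steps := [][]string{
		// 1. copy the pre-tarred build context into the node
		{"minikube", "-p", profile, "cp", "/tmp/build.tar", "/var/lib/minikube/build/build.tar"},
		// 2. unpack it next to the tarball
		{"minikube", "-p", profile, "ssh", "sudo mkdir -p /var/lib/minikube/build/ctx && sudo tar -C /var/lib/minikube/build/ctx -xf /var/lib/minikube/build/build.tar"},
		// 3. build with podman, as the log does for the crio runtime
		{"minikube", "-p", profile, "ssh", "sudo podman build -t " + tag + " /var/lib/minikube/build/ctx --cgroup-manager=cgroupfs"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			panic(err)
		}
	}
}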

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-753218
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.52s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-753218 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 image rm kicbase/echo-server:functional-753218 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-753218 /tmp/TestFunctionalparallelMountCmdspecific-port2212612994/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (277.722861ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1002 20:30:56.733985   12851 retry.go:31] will retry after 695.329179ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-753218 /tmp/TestFunctionalparallelMountCmdspecific-port2212612994/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 ssh "sudo umount -f /mount-9p": exit status 1 (272.076452ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-753218 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-753218 /tmp/TestFunctionalparallelMountCmdspecific-port2212612994/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)
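Note: the "retry.go:31] will retry after ..." lines above come from a retry helper that backs off with a randomized wait between attempts. A small Go sketch in that spirit (an illustrative helper, not minikube's actual pkg/util/retry):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries f with exponential backoff plus jitter, capping each
// wait at maxWait, and returns the last error if all attempts fail.
func retryExpo(f func() error, maxWait time.Duration, attempts int) error {
	wait := 250 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait))) // jitter
		if sleep > maxWait {
			sleep = maxWait
		}
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryExpo(func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("exit status 1")
		}
		return nil
	}, 5*time.Second, 5)
	fmt.Println("done:", err)
}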

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-753218 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000626359/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-753218 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000626359/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-753218 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000626359/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753218 ssh "findmnt -T" /mount1: exit status 1 (316.215304ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1002 20:30:58.785108   12851 retry.go:31] will retry after 415.289408ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-753218 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-753218 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-753218 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000626359/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-753218 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000626359/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-753218 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3000626359/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
I1002 20:31:00.732257   12851 retry.go:31] will retry after 5.925927021s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:1330: Took "312.642803ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "50.84255ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "328.632949ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "59.354135ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-753218 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-753218
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-753218
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-753218
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.46s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-106808 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.46s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.43s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-106808 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (1.22s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-106808 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-106808 --output=json --user=testUser: (1.220941656s)
--- PASS: TestJSONOutput/stop/Command (1.22s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-768876 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-768876 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (64.288392ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e004a536-334c-43fd-8821-412735eb6a4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-768876] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc1b25d1-f167-46e7-85bc-a551058fd7fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"662749fa-1c52-4ba4-b95d-a00f68cc315c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"628187d1-2b92-492e-8f03-b496abaac439","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-9327/kubeconfig"}}
	{"specversion":"1.0","id":"67559848-8db0-4712-a85c-04a504051014","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-9327/.minikube"}}
	{"specversion":"1.0","id":"d1ceb160-baa0-44cd-b489-b149e96250a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1becadf1-f382-45c4-aca0-511ce64d1a36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"938e5bee-e88d-4fd4-96fc-76c282a4d34a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-768876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-768876
--- PASS: TestErrorJSONOutput (0.20s)
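Note: each --output=json line above is a CloudEvents-style envelope; the final io.k8s.sigs.minikube.error event carries the exit code and message. A minimal Go sketch that decodes one of those lines (the struct mirrors the fields visible in the log, not an official schema):

package main

import (
	"encoding/json"
	"fmt"
)

// event models the envelope seen in the log output.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event copied from the stdout above.
	line := []byte(`{"specversion":"1.0","id":"938e5bee-e88d-4fd4-96fc-76c282a4d34a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`)
	var ev event
	if err := json.Unmarshal(line, &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("exit %s: %s (%s)\n", ev.Data["exitcode"], ev.Data["message"], ev.Data["name"])
	}
}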

                                                
                                    
TestKicCustomNetwork/create_custom_network (28.85s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-966691 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-966691 --network=: (26.735778954s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-966691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-966691
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-966691: (2.092997424s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.85s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (26.02s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-441409 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-441409 --network=bridge: (24.090136536s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-441409" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-441409
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-441409: (1.915599712s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.02s)

                                                
                                    
TestKicExistingNetwork (24.07s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1002 21:07:47.526165   12851 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 21:07:47.544295   12851 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 21:07:47.544398   12851 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1002 21:07:47.544415   12851 cli_runner.go:164] Run: docker network inspect existing-network
W1002 21:07:47.561041   12851 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1002 21:07:47.561072   12851 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1002 21:07:47.561087   12851 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1002 21:07:47.561269   12851 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 21:07:47.578942   12851 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000a88510}
I1002 21:07:47.578999   12851 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1002 21:07:47.579046   12851 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1002 21:07:47.635212   12851 network_create.go:108] docker network existing-network 192.168.49.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-237966 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-237966 --network=existing-network: (22.003905471s)
helpers_test.go:175: Cleaning up "existing-network-237966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-237966
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-237966: (1.926882578s)
I1002 21:08:11.583539   12851 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.07s)
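
Note: the flow above can be reproduced by hand. A minimal sketch assembled from the commands in this log (network name, profile name, subnet, gateway, and MTU are this run's values; the label flags simply mirror what the test helper sets):

# Pre-create the bridge network, then point minikube at it instead of letting
# it create its own:
docker network create --driver=bridge --subnet=192.168.49.0/24 \
  --gateway=192.168.49.1 -o --ip-masq -o --icc \
  -o com.docker.network.driver.mtu=1500 \
  --label=created_by.minikube.sigs.k8s.io=true \
  --label=name.minikube.sigs.k8s.io=existing-network existing-network
out/minikube-linux-amd64 start -p existing-network-237966 --network=existing-network
# Deleting the profile leaves the pre-created network in place (the final
# `docker network ls --filter=label=...` check above verifies this), so it
# must be removed separately:
out/minikube-linux-amd64 delete -p existing-network-237966
docker network rm existing-network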

x
+
TestKicCustomSubnet (28.72s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-485760 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-485760 --subnet=192.168.60.0/24: (26.611244869s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-485760 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-485760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-485760
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-485760: (2.084971654s)
--- PASS: TestKicCustomSubnet (28.72s)
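
Note: a minimal sketch of the same check by hand, using the flag and inspect template from the log above (profile name and subnet are this run's values):

out/minikube-linux-amd64 start -p custom-subnet-485760 --subnet=192.168.60.0/24
# Confirm the docker network minikube created carries the requested CIDR;
# this should print 192.168.60.0/24:
docker network inspect custom-subnet-485760 --format "{{(index .IPAM.Config 0).Subnet}}"
out/minikube-linux-amd64 delete -p custom-subnet-485760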

x
+
TestKicStaticIP (27.1s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-114906 --static-ip=192.168.200.200
E1002 21:08:57.204440   12851 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-9327/.minikube/profiles/functional-753218/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-114906 --static-ip=192.168.200.200: (24.905294967s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-114906 ip
helpers_test.go:175: Cleaning up "static-ip-114906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-114906
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-114906: (2.071514248s)
--- PASS: TestKicStaticIP (27.10s)
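
Note: the static-IP variant follows the same pattern; a minimal sketch using this run's values (the test picks an address from a private range):

out/minikube-linux-amd64 start -p static-ip-114906 --static-ip=192.168.200.200
# `minikube ip` should report exactly the address requested above:
out/minikube-linux-amd64 -p static-ip-114906 ip
out/minikube-linux-amd64 delete -p static-ip-114906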

x
+
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

x
+
TestMountStart/serial/StartWithMountFirst (5.4s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-639608 --memory=3072 --mount-string /tmp/TestMountStartserial965839639/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-639608 --memory=3072 --mount-string /tmp/TestMountStartserial965839639/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.394976391s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.40s)
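
Note: the start command above maps a host directory into the node at boot. A minimal sketch of the same invocation with the flags reflowed for readability (all values are this run's; the flag descriptions paraphrase minikube's help text, which serves the mount over 9p):

# --mount-string is host-dir:node-dir; --mount-uid/--mount-gid set ownership
# inside the node; --mount-msize is the 9p payload size in bytes; --mount-port
# pins the host port the mount server listens on.
out/minikube-linux-amd64 start -p mount-start-1-639608 --memory=3072 \
  --mount-string /tmp/TestMountStartserial965839639/001:/minikube-host \
  --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
  --no-kubernetes --driver=docker --container-runtime=crio
# The mount is then visible from inside the node (this is what the
# VerifyMountFirst step below runs):
out/minikube-linux-amd64 -p mount-start-1-639608 ssh -- ls /minikube-host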

x
+
TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-639608 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

x
+
TestMountStart/serial/StartWithMountSecond (5.62s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-682566 --memory=3072 --mount-string /tmp/TestMountStartserial965839639/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-682566 --memory=3072 --mount-string /tmp/TestMountStartserial965839639/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.618180702s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.62s)

x
+
TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-682566 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

x
+
TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-639608 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-639608 --alsologtostderr -v=5: (1.636693663s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-682566 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)
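
Note: the two steps above show profile isolation: deleting the first mount profile does not disturb the second profile's mount. A minimal sketch, per the logs:

out/minikube-linux-amd64 delete -p mount-start-1-639608 --alsologtostderr -v=5
# The second profile's mount is still intact:
out/minikube-linux-amd64 -p mount-start-2-682566 ssh -- ls /minikube-host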

x
+
TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-682566
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-682566: (1.182794317s)
--- PASS: TestMountStart/serial/Stop (1.18s)

x
+
TestMountStart/serial/RestartStopped (7.22s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-682566
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-682566: (6.21846403s)
--- PASS: TestMountStart/serial/RestartStopped (7.22s)

x
+
TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-682566 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)
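
Note: the last three steps show the mount surviving a stop/start cycle. The restart is issued without any mount flags, so the mount configuration is evidently persisted in the profile. A minimal sketch, per the logs above:

out/minikube-linux-amd64 stop -p mount-start-2-682566
out/minikube-linux-amd64 start -p mount-start-2-682566    # no mount flags needed on restart
out/minikube-linux-amd64 -p mount-start-2-682566 ssh -- ls /minikube-host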


Test skip (18/166)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

x
+
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

x
+
TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

x
+
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

x
+
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

x
+
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

x
+
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

x
+
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

x
+
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

x
+
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

x
+
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)
